<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>37912321-ead</externalid>
      <Title>Automotive Software Engineer - AUTOSAR BSW</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced Automotive Software Engineer to join our team in Gothenburg. As a key member of our software development team, you will work closely with system engineers and architects to design and develop robust software platforms for next-generation vehicles.</p>
<p>Your primary responsibilities will include configuring and generating AUTOSAR BSW modules using tools such as Vector DaVinci and EB tresos, ensuring smooth integration into our software platforms. You will also work in modern CI/CD environments with tools like Git, Gerrit, Jenkins, and Docker, contributing to stable and scalable build and integration flows.</p>
<p>A significant part of your role will involve supporting HIL/SIL testing and performing advanced fault analysis using tools such as Vector and Lauterbach Trace32, analyzing runtime behavior, memory usage, and integration challenges directly on target ECUs. Your work will be done in close collaboration with cross-functional teams, following MBSE principles and agile development methods to ensure robust, optimized, and future-proof software solutions.</p>
<p>To succeed in this role, you should have a strong background in software development, preferably with experience in embedded systems, and a good understanding of AUTOSAR methodology and related tools. You should also be proficient in C programming and have experience with communication protocols such as CAN, LIN, SPI, and Ethernet.</p>
<p>As a person, you should be comfortable working in complex technical environments, able to combine problem-solving skills with structured and analytical work. You should be curious, driven, and have a natural ability to see the big picture in a software platform, while also being willing to dive into details when needed. You should appreciate collaboration and knowledge sharing in cross-functional teams and communicate clearly and unpretentiously.</p>
<p>In return, we offer a dynamic and challenging work environment where you can grow, influence, and drive technological advancements forward – making a real difference for future mobility!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>AUTOSAR BSW, Vector DaVinci, EB tresos, Git, Gerrit, Jenkins, Docker, C programming, CAN, LIN, SPI, Ethernet, FPGA, Embedded systems, Automated testing</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>AVL</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.avl.com.png</Employerlogo>
      <Employerdescription>AVL is a global technology company that provides concepts, solutions, and methodologies in fields like vehicle development and integration, e-mobility, automated and connected mobility, and software.</Employerdescription>
      <Employerwebsite>https://jobs.avl.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.avl.com/job/Gothenburg-Automotive-Software-Engineer-AUTOSAR-BSW/1380419333/</Applyto>
      <Location>Gothenburg</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>b7418570-627</externalid>
      <Title>Test Automation Engineer</Title>
      <Description><![CDATA[<p>Develop and evolve the existing test automation framework with a focus on maintainability, reliability, readability, scalability, and security.</p>
<p>Design, configure, and maintain CI/CD pipelines to support automated test execution.</p>
<p>Contribute actively within a Scrum and SAFe development environment.</p>
<p>Develop and automate functional and non-functional tests for both UI and API layers.</p>
<p>Create and maintain clear and comprehensive test documentation.</p>
<p>Work closely with business stakeholders to define and implement value-driven tests.</p>
<p>As a Test Automation Engineer at MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will be part of a team that values diversity, creativity, and unconventional thinking patterns.</p>
<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development. If you are a motivated and experienced Test Automation Engineer looking for a new challenge, please submit your application.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring, Spring Boot, SQL, JUnit 5, AssertJ, Selenium, Cucumber, RestAssured, Docker, Git, GitLab, Clean Code principles, pair programming, code reviews, load and performance testing, DevSecOps, Xray, Behavior-Driven Development, Elasticsearch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. With over 4,000 employees, MHP serves more than 300 customers worldwide.</Employerdescription>
      <Employerwebsite>http://www.mhp.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=17697</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>fb257514-ae0</externalid>
      <Title>Architect for Scalable AI Solutions</Title>
<Description><![CDATA[<p>Are you enthusiastic about innovative technologies and Generative AI? Do you want to design architectures, bring AI solutions into production, build scalable systems, and support customers in integrating modern AI? Then join our team and shape the future of AI-supported architectures, applications, and workflows with us.</p>
<p>Your tasks will include:</p>
<ul>
<li>Designing scalable AI architectures: developing high-performance architectures and integrating ML and GenAI models into customer environments (e.g., SAP, CRM, microservices)</li>
<li>Implementing pipelines and workflows: building scalable data and AI architectures, integrating them into existing pipelines, and developing XOps solutions</li>
<li>Backend services and system integration: developing high-performance services to integrate models into productive workflows and ensuring smooth transitions between training, deployment, and application</li>
<li>Deployment, monitoring, and optimization: implementing prototypes and MVPs in cloud environments, optimizing performance, and ensuring scalability and security</li>
<li>Identifying use cases: analyzing business processes, recognizing potential for GenAI, and deriving technical solutions</li>
<li>Project and stakeholder management: moderating workshops, closely coordinating with interdisciplinary teams, international project partners, and customers</li>
</ul>
<p>To be well-prepared for your path, you should have the following qualifications:</p>
<ul>
<li>Completed studies in computer science, software engineering, data science, or a comparable field with at least 4 years of professional experience, ideally in consulting and (Gen)AI</li>
<li>Passion for AI and Generative AI, scalable systems, cloud technologies, and building high-performance AI infrastructure</li>
<li>Expertise in Python, ML, LLMs, RAG, cloud environments (Azure, AWS, GCP), Docker, Kubernetes, REST APIs, CI/CD</li>
<li>Knowledge in software architecture, cloud-native design, MLOps, and AI security</li>
<li>Your work style is characterized by self-responsibility, goal orientation, teamwork, and hands-on mentality</li>
</ul>
<p>Before you start:</p>
<ul>
<li>Start date: by agreement, always at the beginning of a month</li>
<li>Working hours: full-time (40 hours) or part-time possible; 30 vacation days</li>
<li>Employment: permanent</li>
<li>Field: consulting</li>
<li>Languages: fluent German and English</li>
<li>Flexibility and willingness to travel</li>
<li>Other: a valid work permit is required; if necessary, we can apply for one as part of our recruitment process. This takes time and may affect the start date</li>
</ul>
<p>At MHP, you grow continuously in an innovative and supportive environment, which makes us the ideal sparring partner for your career, for both professional input and networking. We offer you:</p>
<ul>
<li>Appreciation. We support colleagues as they are, value their contributions, and celebrate our successes together</li>
<li>Openness. We always welcome creativity and fresh ideas</li>
<li>Flexibility. In both time and place: depending on the project, at home, in the office, or on site with the customer</li>
<li>Growth. The opportunity to grow with us in tasks, knowledge, and responsibility</li>
</ul>
<p>To apply, please submit your application as soon as possible, online through our Job Locator. There you can send us your application documents, such as your resume, certificates, and, where relevant, project lists, in just a few clicks. A cover letter is not required.</p>
<p>By the way: once your application reaches us, our recruiting team checks across departments whether there is a suitable position for you. Irrespective of current job postings, we try to find the right role for you at MHP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
<Salaryrange></Salaryrange>
      <Skills>Python, ML, LLMs, RAG, cloud environments, Docker, Kubernetes, REST APIs, CI/CD, software architecture, cloud-native design, MLOps, AI security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalizes processes and products for its customers and accompanies them in their IT transformations along the entire value chain.</Employerdescription>
      <Employerwebsite>https://www.mhp.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18795</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>61234903-9fa</externalid>
      <Title>Engineering Manager (Java or Typescript) - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p>Join our Guest Experience department as an Engineering Manager, leading a dynamic team focused on enhancing the search experience of our users.</p>
<p>As an Engineering Manager, you will be part of the Discovery team in the Guest Experience department. The team is responsible for designing and maintaining the list page of our website, ensuring users can easily find the best vacation rental from our search results.</p>
<p>Your contributions will help create a seamless and joyful journey for travellers, which will result in increasing conversion rates and customer satisfaction.</p>
<p>Your team will consist of frontend &amp; backend engineers (direct reports), a project manager and a QA engineer.</p>
<p>You&#39;ll work closely with the Ranking, Conqueror, and Marketing teams, which manage the machine learning models for property ranking on the list page, booking systems, and Holidu&#39;s marketing efforts. Together, you&#39;ll ensure a seamless and cohesive user experience.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Frontend: Typescript and NodeJS processes in Kubernetes. We use ReactJS, Zustand and TailwindCSS on the client and Express on the server.</li>
<li>Backend: Java 17/21, Kotlin (Spring Boot).</li>
<li>Infrastructure: Microservices architecture deployed on AWS Kubernetes (EKS).</li>
<li>Data Management: PostgreSQL, Redis, Elasticsearch 7, Redshift (part of a data lake structure).</li>
<li>DevOps Tools: AWS, Docker, Jenkins, Git, Terraform.</li>
<li>Monitoring &amp; Analytics: ELK, Grafana, Looker, Opsgenie, and in-house solutions.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<ul>
<li>Lead a high-performing cross-functional team, focusing on product innovation, infrastructure reliability, delivery speed, quality, engineering culture, and team growth.</li>
<li>Ensure your team delivers applications that are highly scalable, highly available, and capable of handling high traffic of up to 1 million unique users per day.</li>
<li>Support team growth through regular feedback, mentorship, and by recruiting exceptional engineers.</li>
<li>Work closely with product management, product design, and stakeholders to define the team&#39;s goals (OKRs) and roadmap.</li>
<li>Collaborate with peers, staff engineers, and other stakeholders to drive strategic technology decisions.</li>
<li>Lead strategic team-driven projects, identify opportunities, and define and uphold quality standards.</li>
<li>Foster a great team culture aligned with the company values, ownership, autonomy, and inclusivity within your team and the entire department.</li>
<li>Take full responsibility for delivering impactful features to millions of users annually.</li>
</ul>
<p>The role includes dedicating approximately 40-50% of the time as an individual contributor focused on feature implementation.</p>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, a related technical field or equivalent practical experience.</li>
<li>Experience building and implementing backend services and/or frontend applications.</li>
<li>Experience providing technical leadership (e.g., setting goals and priorities, architecture design, task planning and code reviews).</li>
<li>Experience as a people manager with the ability to build an excellent team culture based on mutual respect, empathy, learning and support for each other.</li>
<li>Love for building world-class products with a great user experience.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters, and you&#39;ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You&#39;ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts, people we can all relate to, making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You&#39;ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work in a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year working from other inspiring locations. You&#39;ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Competitive Package: 95.000-125.000 € + VSOPs, based on relevant experience and seniority.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized, but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>Typescript, NodeJS, ReactJS, Zustand, TailwindCSS, Express, Java, Kotlin, Spring Boot, AWS, Docker, Jenkins, Git, Terraform, PostgreSQL, Redis, Elasticsearch, Redshift, ELK, Grafana, Looker, Opsgenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search and booking services for vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/1558189</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>87749959-700</externalid>
      <Title>Intern Data Engineering (all genders)</Title>
      <Description><![CDATA[<p>Join our Data Engineering team inside the Business Intelligence department, where you&#39;ll work with experienced engineers to build the data foundation that powers Holidu&#39;s growth.</p>
<p>As an intern, you&#39;ll get hands-on experience with real problems and have the opportunity to make a meaningful impact. You&#39;ll work on building and supporting data pipelines, digging into data quality, getting hands-on with cloud infrastructure, and exploring AI-assisted development.</p>
<p>Our team uses a range of technologies, including Redshift, Athena, DuckDB, Terraform, Docker, Jenkins, ELK, Grafana, Looker, OpsGenie, Kafka, Airbyte, and Fivetran. You&#39;ll have the chance to learn from experienced engineers and contribute to the development of our data systems.</p>
<p>In this role, you&#39;ll be part of a team that genuinely loves what they do and is passionate about building a better data foundation for Holidu. You&#39;ll have the opportunity to take responsibility from day one and develop through regular feedback.</p>
<p>We offer a fair salary, the chance to make a difference for hundreds of thousands of monthly users, and the opportunity to grow and develop through regular feedback. You&#39;ll also have access to a range of benefits, including a hybrid work policy, the chance to work from other local offices, and a corporate subscription to Urban Sports Club or a premium gym membership at a discounted rate.</p>
]]></Description>
      <Jobtype>Internship</Jobtype>
      <Experiencelevel>intern</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Git, Airflow, dbt, Docker, Cloud platform (AWS, GCP, etc.), LLM tools, AI-assisted coding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2557398</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64bb6566-575</externalid>
      <Title>Senior ‘Developer Infrastructure’ Engineer</Title>
      <Description><![CDATA[<p>The GALAXY Platform Execution &amp; Exchange Data (SPEED) Team is a core part of Millennium&#39;s technology organisation, powering the firm&#39;s lowest-latency solutions for systematic and high-frequency trading.</p>
<p>SPEED delivers the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>
<p>As a Senior Developer Infrastructure Engineer on SPEED, you will own and evolve the build and CI/CD infrastructure that underpins these mission-critical systems.</p>
<p>By designing scalable build pipelines, shared tooling, and reliable release workflows, you will directly enhance developer productivity and enable fast, safe iteration on some of the firm&#39;s most performance-sensitive code.</p>
<p>This role offers the opportunity to shape core engineering practices while contributing to platforms that are central to Millennium&#39;s trading edge.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain a highly scalable, parallel, and cached build system for a large, performance-sensitive codebase.</li>
<li>Own and continually optimise CI/CD pipelines to minimise build/test times, reduce flakiness, and improve developer productivity.</li>
<li>Operate with an AI-first mindset across the SDLC, using automation by default to streamline build, test, and release workflows.</li>
<li>Integrate and operationalise AI tools (e.g., copilots, workflow automation, AI-driven analytics) to eliminate manual toil, accelerate development, and codify reusable AI-enabled patterns for the broader engineering organisation.</li>
<li>Design and operate containerised environments (e.g., Docker, Kubernetes) to maximise utilisation, reliability, and scalability across environments.</li>
<li>Implement and manage artifact storage, dependency management, and versioning strategies for large, distributed systems.</li>
<li>Develop and maintain shared libraries, CLIs, scripts, and internal platforms that reduce friction and enable self-service for engineers.</li>
<li>Build and enhance test suites and environment provisioning, leveraging AI and automation where appropriate for smarter checks, triage, and observability.</li>
<li>Monitor, instrument, and improve the reliability, observability, and performance of build and CI/CD systems using metrics, dashboards, and alerting.</li>
<li>Partner with trading and engineering teams to understand requirements, remove friction, and champion best practices for building, testing, and releasing software.</li>
</ul>
<p>Qualifications/Skills Required</p>
<ul>
<li>5+ years of software engineering or DevInfra/Platform/DevOps experience, with significant focus on build systems and CI/CD.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Rust, Go, C++) for automation and tooling.</li>
<li>Hands-on experience with at least one modern build system (e.g., Bazel, Buck2).</li>
<li>Solid understanding of source control (Git), branching strategies, and release management.</li>
<li>Experience with monorepos is a plus.</li>
<li>Experience scaling build and test infrastructure for growing codebases and teams (parallelization, test sharding, remote execution, caching).</li>
<li>Experience designing or participating in processes, systems, or playbooks that leverage AI to streamline work rather than adding more headcount.</li>
<li>Familiarity with containers and cloud infrastructure (Docker, Kubernetes, and major cloud providers such as AWS/GCP/Azure).</li>
<li>Strong communication and collaboration skills; comfortable partnering with multiple teams and driving cross-cutting initiatives.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium offers a total compensation package that includes a base salary, a discretionary performance bonus, and comprehensive benefits. When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Rust, Go, C++, Bazel, Buck2, Git, Kubernetes, Docker, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
<Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a company that provides equities, quant strategies, and shared services technology.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954695574</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7e58f60-5fa</externalid>
      <Title>Software Engineer - Learning Engineering and Data (LEaD) Program</Title>
      <Description><![CDATA[<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>
<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>
<p>The candidate is expected to:</p>
<ul>
<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>
<li>Take part in the development and enhancement of the backend distributed system</li>
<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>2-5 years of experience working with C++, Python, or Java</li>
<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>
<li>Must be comfortable working in both Unix/Linux and Windows environments</li>
<li>Good understanding of various design patterns</li>
<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>
<li>Solid communication skills</li>
<li>Able to work collaboratively in a fast-paced environment with a passion for solving complex problems</li>
<li>Detail oriented, organized, demonstrating thoroughness and strong ownership of work</li>
</ul>
<p>Desirable Skills/Knowledge:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>
<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>
<li>Hands-on experience building ML and data pipeline architectures</li>
<li>Understanding of distributed messaging systems</li>
<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>
<li>Experience with relational and non-relational database platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Docker, Kubernetes, microservices, AWS, GCP, distributed messaging systems, relational and non-relational databases</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
<Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a large global alternative investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953879362</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d7fadcc-6fa</externalid>
      <Title>Data Scientist Computer Vision</Title>
<Description><![CDATA[<p>At Bayer, we&#39;re seeking a talented Data Scientist with deep learning and machine learning expertise focused on image-based data to help shape the future of agriculture. In this role, you&#39;ll join a dynamic team that supports the development of Bayer Crop Science&#39;s next-generation products by applying computer vision to automate critical processes across the Plant Biotechnology organisation.</p>
<p>The primary responsibilities of this role are to:</p>
<p>Solve real agricultural problems using deep learning and AI across image and other data modalities, translating complex models into tangible business and scientific impact.</p>
<p>Design and implement end-to-end machine learning pipelines for computer vision use cases, including segmentation, classification, detection, and multi-task learning.</p>
<p>Prototype, evaluate, and iterate on cutting-edge architectures such as CNNs, Vision Transformers, and foundation and large-scale vision models, ensuring state-of-the-art performance.</p>
<p>Optimize models for accuracy, robustness, and inference efficiency, including experimentation with hyperparameters, compression, and deployment-oriented optimisations.</p>
<p>Independently build scalable data pipelines for training, validation, and evaluation, including data ingestion, augmentation strategies, and active learning loops.</p>
<p>Collaborate cross-functionally with product, data, and software engineering teams to integrate models into production systems and deliver reliable, maintainable solutions.</p>
<p>Contribute to MLOps practices, including model versioning, deployment, monitoring, and retraining workflows using modern tooling and cloud-based platforms.</p>
<p>Build strong cross-functional relationships and actively engage with the broader Data Science Community to share best practices, align on standards, and co-create innovative solutions.</p>
<p>Present clear, compelling, and validated stories about experiments, results, and recommendations to peers, senior management, and internal customers to drive strategic and operational decisions.</p>
<p>We seek a candidate who possesses the following:</p>
<p>M.S. with 2+ years of experience or Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on machine learning or computer vision.</p>
<p>Proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.</p>
<p>Hands-on experience with modern computer vision architectures including models such as ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, and Vision Transformers.</p>
<p>Strong background in handling large-scale datasets and creating custom datasets, for example using frameworks such as Hugging Face Datasets.</p>
<p>Solid understanding of core machine learning concepts including loss functions, regularization, optimisation, and learning rate scheduling.</p>
<p>Experience developing and deploying models using cloud-based ML platforms such as AWS SageMaker.</p>
<p>Familiarity with Unix environments, including bash, file systems, and core utilities.</p>
<p>Strong engineering practices including use of Git, Docker, CI/CD pipelines, modular codebase design, and unit testing.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$109,370.40 - $164,055.60</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, Vision Transformers, Hugging Face Datasets, AWS SageMaker, Git, Docker, CI/CD pipelines, modular codebase design, unit testing</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a presence in over 100 countries.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976908666</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f7aeee90-9b7</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>Join HSBC and build a career that helps you stand out. We offer opportunities, support and rewards that will take you further.</p>
<p>As an Associate Director, Software Engineering, you will:</p>
<ul>
<li>Lead the development and implementation of Microservices-based solutions using Java.</li>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should have:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>A solid understanding of cloud platforms (e.g., Kubernetes, GCP, and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
</ul>
<p>Certifications in Java or cloud technologies are a plus.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices architecture, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662228</Applyto>
      <Location>Hyderabad, Telangana, India · Bangalore, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aee9464f-897</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>We are currently seeking an experienced professional to join our team in the role of Associate Director, Software Engineering.</p>
<p>In this role, you will:</p>
<ul>
<li>Lead the development and implementation of Microservices-based solutions using Java.</li>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should meet the following requirements:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., Kubernetes platform, GCP and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
<li>Certifications in Java or cloud technologies are a plus.</li>
</ul>
<p>You&#39;ll achieve more when you join HSBC.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662222</Applyto>
      <Location>Bangalore, Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>52261e57-a37</externalid>
      <Title>Senior Software Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p>You&#39;ll work with modern tooling, a cross-functional team, and teammates who care deeply about impact, collaboration, and learning together.</p>
<p>As a Senior Software Engineer - Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p>You don&#39;t need to meet every requirement; we&#39;re looking for strong fundamentals, ownership, and the motivation to grow.</p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>
<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</p>
<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>
<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>
<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>
<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, ML model deployment, LLM tools and agents, Data science models, Reliable and scalable production systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a company that provides a platform for hosting and booking accommodations.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597551</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a277a7cc-202</externalid>
      <Title>Staff Frontend Developer - Guest Experience (all genders)</Title>
      <Description><![CDATA[<p><strong>Our Current Itinerary</strong></p>
<p>Are you ready to shape the future of travel tech at scale? We are seeking an exceptional Staff Frontend Developer to drive technical excellence across our entire booking funnel.</p>
<p>We&#39;re among the leading travel tech companies worldwide, growing substantially and sustainably year after year, with a mission to make vacation home booking and hosting decisions stress-free and packed with joy.</p>
<p>Our vibrant team of over 600 talented individuals from 60+ countries shares a passion for cutting-edge technology, constant improvement, and creating exceptional experiences for our 50,000 hosts and 100 million website users each year.</p>
<p><strong>Your Future Team</strong></p>
<p>As a Staff Frontend Engineer, you&#39;ll be the technical authority across all teams in the booking funnel: from the Discovery team&#39;s list pages all the way through the checkout funnel to the Post Booking experience.</p>
<p>You&#39;ll design and implement overarching frontend architecture that scales to handle millions of users, while establishing best practices that elevate the entire engineering department.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Core Technologies: TypeScript, ReactJS, NodeJS, Zustand, TailwindCSS, Express, Vite, SSR.</li>
<li>Data Infrastructure: DynamoDB, Redis.</li>
<li>Cloud &amp; DevOps: AWS, Kubernetes, Docker, Jenkins, Git.</li>
<li>Monitoring &amp; Analytics: Sentry, ELK, Grafana, Looker, OpsGenie, and internally developed technologies.</li>
</ul>
<p><strong>Technical Leadership &amp; Strategy</strong></p>
<ul>
<li>Define the technical vision and strategy for the frontend engineers of the GX department, aligning with organizational goals and anticipating industry trends.</li>
<li>Architect scalable, high-availability frontend systems serving 1M+ daily users across the entire booking funnel.</li>
<li>Lead the design and implementation of department-wide technical initiatives that impact conversion rates, customer satisfaction, and technical excellence.</li>
</ul>
<p><strong>Cross-Team Collaboration &amp; Influence</strong></p>
<ul>
<li>Partner with Engineering Managers and Department Leaders to shape the technical roadmap.</li>
<li>Contribute to specifications for large-scale projects, organizing parallel workstreams that reassemble into cohesive launches.</li>
</ul>
<p><strong>Technical Excellence &amp; Innovation</strong></p>
<ul>
<li>Establish, iterate on, and enforce engineering best practices (testing, documentation, architecture) department-wide.</li>
<li>Review code and set quality standards that become the gold standard across teams.</li>
</ul>
<p><strong>Mentorship &amp; Knowledge Leadership</strong></p>
<ul>
<li>Mentor senior developers, helping them grow into technical leaders.</li>
<li>Lead department-wide knowledge sharing initiatives and technical workshops.</li>
</ul>
<p><strong>Your Backpack is Filled with</strong></p>
<ul>
<li>8+ years of frontend development experience with deep expertise in JavaScript (ES6+), TypeScript, and ReactJS.</li>
<li>Proven track record of architecting large-scale frontend applications handling millions of users.</li>
<li>Expert-level proficiency with state management, performance optimization, and modern build tools.</li>
</ul>
<p><strong>Leadership &amp; Strategic Thinking</strong></p>
<ul>
<li>Demonstrated ability to define and execute technical strategies at department or company level.</li>
<li>Experience leading cross-functional initiatives and influencing without direct authority.</li>
</ul>
<p><strong>Business &amp; Domain Knowledge</strong></p>
<ul>
<li>Ability to connect technical decisions to business KPIs and department goals.</li>
<li>Experience working closely with product and business stakeholders at all levels.</li>
</ul>
<p><strong>Our Adventure Includes</strong></p>
<ul>
<li>Strategic Impact: Shape the technical direction of a rapidly growing travel tech leader.</li>
<li>Technical Excellence: Work with cutting-edge technologies and influence architectural decisions.</li>
<li>Leadership Growth: Lead initiatives that impact millions of users and mentor the next generation of engineers.</li>
</ul>
<p><strong>Want to Travel with Us?</strong></p>
<p>Take a peek into our culture on Instagram @lifeatholidu and check out Tech at Holidu to meet the people behind the product.</p>
<p>Apply now and let’s make vacation dreams come true – at scale.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>95.000-125.000€ + VSOPs based on relevant experience and seniority</Salaryrange>
      <Skills>JavaScript, TypeScript, ReactJS, NodeJS, Zustand, TailwindCSS, Express, Vite, SSR, DynamoDB, Redis, AWS, Kubernetes, Docker, Jenkins, Git, Sentry, ELK, Grafana, Looker, OpsGenie</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading travel tech company that provides vacation home booking and hosting services. It has a team of over 600 individuals from 60+ countries.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2247550</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f156ea4b-6a3</externalid>
      <Title>Senior DataOps Engineer / Software Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>Join our Dynamic Pricing &amp; Revenue Management team as a Senior DataOps Engineer / Software Engineer. You&#39;ll work alongside a Data Scientist and a Data Analyst to develop a smart, dynamic, and data-driven pricing strategy. Our team uses modern tooling, including S3, Redshift, Athena, DuckDB, MLflow, SageMaker, Terraform, Docker, Jenkins, and AWS EKS.</p>
<p>As a Senior DataOps Engineer / Software Engineer, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You&#39;ll bridge the gap between data science models and reliable, scalable production systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p>We&#39;re looking for someone with 4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps. You should have strong hands-on skills in Python, experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform), familiarity with cloud platforms (AWS preferred), and deploying services in production. Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</p>
<p>Our team is passionate about using cutting-edge LLM tools and agents to improve productivity. We&#39;re looking for someone who is proactive and hands-on, takes ownership of problems, and drives solutions forward.</p>
<p>Benefits include:</p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment with a pace of a scale-up combined with the stability of a proven business model.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p>If you&#39;re interested in joining our team, apply online on our careers page! Your first travel contact will be Katharina from HR.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, Deploying services in production, ML model deployment, LLM tools and agents</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH operates a platform for holiday rentals, connecting hosts with guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2523360</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5d48ddb1-b45</externalid>
      <Title>Mission Software Engineering Manager, Public Sector</Title>
      <Description><![CDATA[<p>We are looking for a Mission Software Engineering Manager to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Recruit a high-performing engineering team.</li>
<li>Drive engineering productivity. Provide guidance, mentorship, and technical leadership to a team of engineers working on Generative AI projects.</li>
<li>Collaborate with cross-functional teams to define, design, and execute a strategic roadmap.</li>
<li>Work directly with customers to understand their problems and translate those into features in Scale’s platform.</li>
<li>Be open to ~25% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer’s data to Scale’s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites.</li>
<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation</li>
<li>2+ years of prior engineering management or equivalent experience, including having managed an engineering team.</li>
<li>Track record of success as a hybrid customer-facing engineer and forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc.</li>
<li>Networking experience: Understanding of networking technologies and configuration (ports, protocols, etc.)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>
<li>Understand unique DoD and USG constraints when it comes to technology</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$273,700-$341,550 USD</Salaryrange>
      <Skills>Python, JavaScript, Cloud-Native Technologies, Linux, Networking, Data Engineering, Problem Solving, Node, React, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>273700</Compensationmin>
      <Compensationmax>341550</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4631039005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90b5ac1d-d16</externalid>
      <Title>Senior Software Engineer, Backend — Frontier Data</Title>
      <Description><![CDATA[<p>The Frontier Data team builds the data and systems that power Scale&#39;s most advanced Frontier AI use cases. We&#39;re looking for a Senior Backend Engineer who thrives in ambiguity, moves fast, and enjoys tackling daunting challenges.</p>
<p>As a Senior Backend Engineer, you will own major backend systems for frontier agentic data products, driving projects from early exploration through production deployment. You will build scalable services and pipelines that support agent workflows, architect modular, reusable backend systems, and operate in high-ambiguity environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and building scalable systems while partnering closely with research, product, operations, and other engineering teams</li>
<li>Building scalable services and pipelines that support agent workflows</li>
<li>Architecting modular, reusable backend systems that adapt to evolving product needs</li>
<li>Operating in high-ambiguity environments and breaking down open-ended problems</li>
<li>Partnering cross-functionally with product, research/ML, and infrastructure teams</li>
</ul>
<p>Ideal experience includes 5+ years of full-time software engineering experience, strong backend engineering fundamentals, and experience building systems that scale.</p>
<p>Compensation packages at Scale include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors.</p>
<p>Additional benefits include comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Distributed systems, API design, Data modeling, Production reliability, Docker, Containerized development/production environments, SQL, Modern database-backed application development, Async processing, Workflow engines, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4648525005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c19e39af-feb</externalid>
      <Title>Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>
<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>
<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>
<li>Help build and refine reusable features that support diverse international client use cases.</li>
<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>
<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>
<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Proficiency in Arabic</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, experience working cross functionally with operations, experience building solutions with LLMs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676602005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2d16873c-e17</externalid>
      <Title>Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>
<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>
<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>
<li>Help build and refine reusable features that support diverse international client use cases.</li>
<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>
<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>
<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficiency in Arabic</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, cross functional collaboration, LLM solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676600005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94999453-111</externalid>
      <Title>Senior Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with public sector clients to scope, collect feedback, and implement solutions for complex problems, including spending up to two weeks per month in client offices for feedback and delivery.</li>
<li>Architect production-grade applications that integrate AI models with full-stack frameworks, managing everything from interactive UIs to backend APIs and systems.</li>
<li>Deploy and manage infrastructure within cloud environments, ensuring the highest levels of system integrity, security, scalability, and long-term reliability.</li>
<li>Contribute to core platform features designed to be reused across diverse international client use cases.</li>
<li>Partner with design, product, and data teams to build robust applications aligned with the broader technical architecture.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>5+ years of post-graduation, full-stack engineering experience with demonstrated proficiency in React (required), TypeScript, Next.js, Python, Node.js, and PostgreSQL or MongoDB, plus hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</li>
<li>Proven ability to architect scalable, production-grade applications with a strong handle on cloud environments and infrastructure health.</li>
<li>Experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</li>
<li>A self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software.</li>
<li>Experience driving async communication practices to reduce communication friction.</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficiency in Arabic</li>
<li>Past experience working in a forward-deployed engineer / dedicated customer engineer role</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676608005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44975b06-cb1</externalid>
      <Title>Senior Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve critical challenges and achieve meaningful impact for citizens.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, and hands-on experience with Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>We&#39;re looking for a self-starting approach with the technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also drive async communication practices to reduce communication friction.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673310005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>13667989-d19</externalid>
      <Title>Staff Software Engineer, AI Developer Tooling</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Platform Engineering team. As a key member of our team, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices.</p>
<p>In this role, you will:</p>
<ul>
<li>Define next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Drive the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentor software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identify opportunities and drive improvements to software development practices, processes, tools, and languages.</li>
<li>Present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>8+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails, security frameworks, and knowledge-sharing sessions.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, with the ability to translate complex technical concepts for non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like Node.js, Next.js, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, Node.js, Next.js, CircleCI, Helm, ArgoCD</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>252000</Compensationmin>
      <Compensationmax>315000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4518088005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45fc6ed2-285</externalid>
      <Title>Senior Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Full-Stack Software Engineer to join our Global Public Sector team. As a forward-deployed engineer, you&#39;ll collaborate directly with public sector counterparts to build full-stack AI applications that solve their most pressing challenges.</p>
<p>Our core work consists of creating custom AI applications, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>You&#39;ll partner with public sector clients to scope, collect feedback, and implement solutions for complex problems. You&#39;ll also architect production-grade applications that integrate AI models with full-stack frameworks, manage infrastructure within cloud environments, and contribute to core platform features.</p>
<p>Ideally, you&#39;ll have a Bachelor&#39;s degree in Computer Science or a related quantitative field, 5+ years of full-stack engineering experience, and proficiency in React, TypeScript, Next.js, Python, Node.js, PostgreSQL or MongoDB, Docker, Kubernetes, and Azure/AWS/GCP.</p>
<p>You&#39;ll be a self-starting individual with technical maturity to navigate ambiguous requirements and deliver reliable software. You&#39;ll also have experience working directly within customer infrastructure to deploy, maintain, and troubleshoot complex, end-to-end solutions.</p>
<p>Nice to have: proficient in Arabic, past experience working in a forward-deployed engineer/dedicated customer engineer role, experience working cross-functionally with operations, and experience building solutions with LLMs and a deep understanding of the overall Gen AI landscape.</p>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Next.js, Python, Node.js, PostgreSQL, MongoDB, Docker, Kubernetes, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676606005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>14499a71-fa9</externalid>
      <Title>Software Engineer, Enterprise</Title>
      <Description><![CDATA[<p>At Scale AI, we&#39;re pioneering the next era of enterprise AI. As businesses race to harness the power of Generative AI, Scale is at the forefront, delivering cutting-edge solutions that transform workflows, automate complex processes, and drive unparalleled efficiency for the largest enterprises.</p>
<p>We&#39;re looking for a Backend Engineer to help bring large-scale GenAI systems to production. In this role, you&#39;ll build the core infrastructure that powers AI products for some of the world&#39;s largest enterprises, designing scalable APIs, distributed data systems, and robust deployment pipelines that enable production-grade reliability and performance.</p>
<p>This is a rare opportunity to be at the center of the GenAI revolution, solving hard backend and infrastructure challenges that make AI truly work at enterprise scale. If you&#39;re excited about shaping how AI systems are deployed and scaled in the real world, we want to hear from you.</p>
<p>At Scale, we don&#39;t just follow AI advancements, we lead them. Backed by deep expertise in data, infrastructure, and model deployment, we are uniquely positioned to solve the hardest problems in AI adoption. Join us in shaping the future of enterprise AI, where your work will directly impact how businesses operate, innovate, and grow in the age of GenAI.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale&#39;s and customers&#39; infrastructure.</li>
<li>Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.</li>
<li>Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.</li>
<li>Optimize backend performance for latency, throughput, and cost, ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.</li>
<li>Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.</li>
<li>Collaborate with ML and product teams to bring cutting-edge GenAI models into production through efficient APIs, model serving systems, and evaluation frameworks.</li>
<li>Continuously improve reliability and scalability, applying strong engineering practices to make AI systems robust, maintainable, and enterprise-ready.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4+ years of experience developing large-scale backend or infrastructure systems, with a strong emphasis on distributed services, reliability, and scalability.</li>
<li>Proficiency in Python or TypeScript, with experience designing high-performance APIs and backend architectures using frameworks such as FastAPI, Flask, Express, or NestJS.</li>
<li>Deep familiarity with cloud infrastructure (AWS and Azure preferred), including container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform.</li>
<li>Experience managing data systems such as relational and NoSQL databases (PostgreSQL, DynamoDB, etc.) and building pipelines for data-intensive applications.</li>
<li>Hands-on experience with GenAI applications, model integration, or AI agent systems, understanding how to deploy, evaluate, and scale AI workloads in production.</li>
<li>Strong understanding of observability, CI/CD, and security best practices for running services in enterprise or multi-tenant environments.</li>
<li>Ability to balance rapid iteration with production-grade quality, shipping reliable backend systems in fast-paced environments.</li>
<li>Collaborative mindset, working closely with ML, infra, and product teams to bring complex GenAI systems into production at enterprise scale.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, FastAPI, Flask, Express, NestJS, AWS, Azure, Kubernetes, Docker, Terraform, PostgreSQL, DynamoDB, GenAI, Model Integration, AI Agent Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4536653005</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cc75c6b0-4db</externalid>
      <Title>Machine Learning Fellow - Human Frontier Collective (Canada)</Title>
      <Description><![CDATA[<p>This is a fully remote, 1099 independent contractor opportunity with an estimated duration of six months and the potential for extension.</p>
<p>As an HFC Fellow, you&#39;ll apply your academic and professional expertise to help design, evaluate, and interpret advanced generative AI systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Engaging in high-impact projects with partnered AI labs and platforms</li>
<li>Designing, reviewing, and optimising PyTorch models</li>
<li>Evaluating complex ML code and AI-generated implementations for efficiency and correctness</li>
<li>Advising on GPU optimisation, scaling, and trade-offs</li>
</ul>
<p>You&#39;ll also become part of a supportive, interdisciplinary network of innovators and thought leaders committed to advancing frontier AI across domains.</p>
<p>Collaboration with Scale&#39;s research team to co-author technical reports and research papers is also expected.</p>
<p>To be eligible, candidates must be authorised to work in Canada and have a PhD or postdoctoral degree in Computer Science, Computer Engineering, or a related field.</p>
<p>Professional background as a Machine Learning Engineer or Data Scientist with 1-3+ years of experience is also required.</p>
<p>Strong proficiency in Python and modern ML frameworks (PyTorch, TensorFlow) is essential, along with experience with cloud infrastructure (AWS) and MLOps tools (Docker, Langchain).</p>
<p>A detail-oriented, innovative thinker with a passion for applied AI research and a commitment to collaboration is ideal.</p>
<p>A flexible schedule of 10&#8211;40 hours per week that fits around your life and other commitments is offered.</p>
<p>Project pay rates vary across platforms and depend on a number of factors, including but not limited to: project, scope, skillset, and location.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, AWS, Docker, Langchain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Human Frontier Collective</Employername>
      <Employerlogo>https://logos.yubhub.co/humanfrontiercollective.com.png</Employerlogo>
      <Employerdescription>The Human Frontier Collective is a programme that brings together top researchers and domain experts to collaborate on high-impact work in AI.</Employerdescription>
      <Employerwebsite>https://humanfrontiercollective.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4661650005</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43952002-812</externalid>
      <Title>Software Engineer, AI Developer Tooling</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our Platform Engineering team. As a Software Engineer, you will redefine how engineers develop, build, test, and deploy software at Scale using AI development tools in addition to traditional practices. You&#39;ll also get widespread exposure to the forefront of the AI race as Scale sees it in enterprises, startups, governments, and large tech companies.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Defining next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Driving the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentoring software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identifying opportunities and driving improvements to software development practices, processes, tools, and languages.</li>
<li>Presenting technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best-practices for AI development tools, including cost guardrails, security frameworks, and hosting knowledge-sharing sessions, among others.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, MCP servers, AI agents, standard infrastructure, containerization, deployment technologies, modern web frameworks, software engineering best practices, CI/CD tooling, Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, NodeJS, NextJS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676936005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>859cb1cf-b9c</externalid>
      <Title>Senior AI Infrastructure Engineer, Model Serving Platform</Title>
      <Description><![CDATA[<p>As a Senior AI Infrastructure Engineer on the Model Serving Platform team, you will design and build platforms for scalable, reliable, and efficient serving of Large Language Models (LLMs). Our platform powers cutting-edge research and production systems, supporting both internal and external use cases across various environments.</p>
<p>The ideal candidate combines strong ML fundamentals with deep expertise in backend system design. You’ll work in a highly collaborative environment, bridging research and engineering to deliver seamless experiences to our customers and accelerate innovation across the company.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain fault-tolerant, high-performance systems for serving LLM workloads at scale.</li>
<li>Build an internal platform to empower LLM capability discovery.</li>
<li>Collaborate with researchers and engineers to integrate and optimize models for production and research use cases.</li>
<li>Conduct architecture and design reviews to uphold best practices in system design and scalability.</li>
<li>Develop monitoring and observability solutions to ensure system health and performance.</li>
<li>Lead projects end-to-end, from requirements gathering to implementation, in a cross-functional environment.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>5+ years of experience building large-scale, high-performance backend systems.</li>
<li>Strong programming skills in one or more languages (e.g., Python, Go, Rust, C++).</li>
<li>Experience with LLM serving and routing fundamentals (e.g. rate limiting, token streaming, load balancing, budgets, etc.).</li>
<li>Experience with LLM capabilities and concepts such as reasoning, tool calling, prompt templates, etc.</li>
<li>Experience with containers and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with cloud infrastructure (AWS, GCP) and infrastructure as code (e.g., Terraform).</li>
<li>Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with modern LLM serving frameworks such as vLLM, SGLang, TensorRT-LLM, or text-generation-inference.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Go, Rust, C++, Docker, Kubernetes, AWS, GCP, Terraform, vLLM, SGLang, TensorRT-LLM, text-generation-inference</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4520320005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f117ca6-268</externalid>
      <Title>Senior Technical Consultant - ElasticSearch</Title>
      <Description><![CDATA[<p>As a Sr. Technical Consultant – Search, you will play a pivotal role in helping our customers realise the value of Elastic&#39;s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You&#39;ll collaborate with Elastic&#39;s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Search platform, ensuring customers maximise the value of their data while achieving business outcomes. This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack</li>
<li>Leading end-to-end delivery of customer engagements – from discovery and design through implementation, enablement, and optimisation</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement</li>
</ul>
<p>The ideal candidate will have 5+ years of experience as a consultant, engineer, or architect with deep expertise in Enterprise Search technologies, including Elasticsearch and related search platforms. They will also have hands-on experience designing and deploying search solutions, proficiency in at least one programming language, and knowledge of distributed search systems and large-scale infrastructure.</p>
<p>The role offers a competitive salary range of $110,900-$175,500 USD, with opportunities for growth and professional development in a dynamic and distributed company.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$110,900-$175,500 USD</Salaryrange>
      <Skills>Elasticsearch, Enterprise Search, Search Architecture, Distributed Search Systems, Large-Scale Infrastructure, Programming Language, Cloud Platforms, Lucene, Databases, Linux, Java, Docker, Kubernetes, DevOps Practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that provides a search and analytics platform for various industries.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7411526</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4daeb1d2-f04</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>We are seeking a senior software engineer to join our team in Vancouver. As a fullstack software engineer, you will work with your team and product management to make insights from data simple. You&#39;ll set the foundation for how we build robust, scalable, and delightful products.</p>
<p>Our customers increasingly use Databricks to analyze petabyte-scale logs in real time. This creates new challenges across the entire data processing pipeline, including ingestion, indexing, processing, and the user experience itself. Our customers are also using Databricks to launch AI/BI, which is redefining Business Intelligence for the AI age. We have several open roles across the teams below:</p>
<ul>
<li>Log Analytics: Our customers increasingly use Databricks to analyze petabyte-scale logs in real time.</li>
<li>AI/BI: AI/BI is redefining Business Intelligence for the AI age.</li>
<li>Unity Catalog Business Semantics: Context is everything for AI. For enterprise data, that context needs to be governed and managed, which is what Unity Catalog Business Semantics offers.</li>
<li>Databricks Apps: Databricks Apps is one of the fastest growing products at Databricks, used by more than 2,500 customers who have created more than 20,000 apps.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with HTML, CSS, and JavaScript.</li>
<li>Passion for user experience and design and a deep understanding of front-end architecture.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value.</li>
<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>
<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, Go).</li>
<li>Good knowledge of SQL.</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, or Kubernetes.</li>
<li>Experience developing large-scale distributed systems.</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>Canada Pay Range: $146,200-$201,100 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,200-$201,100 CAD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8099342002</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a6557b2b-d24</externalid>
      <Title>Senior Platform Engineer II, Compute Services</Title>
      <Description><![CDATA[<p>We are seeking a Senior Platform Engineer to join our Kubernetes Infrastructure team. This role involves administering our critical multi-tenant Kubernetes platforms and collaborating with development teams to establish proper deployment architectures.</p>
<p>The ideal candidate will have a strong background in resilient Kubernetes application architecture and deployment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Champion reliability initiatives for Kubernetes application deployments: Advocate for best practices to ensure high availability, scalability, and resilience of applications in Kubernetes, focusing on robust testing, secure pipelines, and efficient resource use.</li>
<li>Administer multi-tenant Kubernetes platforms: Manage complex multi-tenant Kubernetes clusters, configuring access, quotas, and security for isolation and optimal resource allocation while upholding SLAs.</li>
<li>Perform lifecycle and day 2 operations on clusters: Execute Kubernetes cluster lifecycle, including provisioning, patching, monitoring, backup, disaster recovery, and troubleshooting.</li>
<li>Deep dive into reliability issues: Conduct in-depth analysis and root cause identification for complex reliability incidents in Kubernetes, utilizing advanced debugging and monitoring tools to propose preventative measures.</li>
<li>Perform on-call duties: Respond to critical alerts and incidents outside business hours, providing timely resolution to minimize disruptions, collaborating with teams, and communicating clearly.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s in CS, Engineering, or related field, or equivalent experience preferred.</li>
<li>CKA or a similar certification is highly desired.</li>
<li>5+ years administering multi-tenant SaaS Kubernetes (EKS, AKS, GKE).</li>
<li>Strong GitOps/DevOps experience with Argo CD or similar Helm chart management.</li>
<li>Proven Docker and containerization experience.</li>
<li>Strong Linux OS experience.</li>
<li>Proficient in Go.</li>
<li>Excellent problem-solving, debugging, and analytical skills.</li>
<li>Strong communication and collaboration.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>Benefits</strong></p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p><strong>Workplace</strong></p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Gitops/Devops, Argocd, Helm chart management, Docker, Containerization, Linux OS, Go, Problem-solving, Debugging, Analytical skills, Communication, Collaboration, CKA, Performance profiling, Optimization of distributed systems, Network protocols, Distributed consensus algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4607559006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b8cbfe7-a98</externalid>
      <Title>Senior Software Engineer, Echo</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As a Senior Software Engineer on the Echo team, you will solve unique, large scale, highly complex technical problems, bridging the constraints posed by web-scale applications and blockchain technology.</p>
<p>You will help build the next generation of systems to make cryptocurrency accessible to everyone across the globe, operating real-time applications with high frequency, low latency updates, and managing the most secure, dockerized infrastructure running in the cloud.</p>
<p>The Echo team is responsible for two innovative products in the capital formation space: Echo and Sonar.</p>
<p>We are a small team that operates as a startup within the larger org, and we’re committed to shipping impactful products at a fast pace.</p>
<p>Echo, our marketplace for private investments, has facilitated over 300 deals and $150m invested since 2024, and Sonar, our public sales and compliance platform, enables customers to run their own token sales.</p>
<p>Our engineering team works across the whole stack and is empowered to take ownership of large projects.</p>
<p>What you&#39;ll be doing:</p>
<ul>
<li>Build new services to meet critical product and business needs using Golang.</li>
<li>Design scalable systems to solve novel problems with modern cloud technology and industry best practices.</li>
<li>Articulate a long-term vision for maintaining and scaling our backend systems and the teams running them.</li>
<li>Work with engineers, designers, product managers and senior leadership to turn our product and technical vision into a tangible roadmap every quarter.</li>
<li>Write high quality, well tested code to meet the needs of your customers.</li>
</ul>
<p>What we look for in you:</p>
<ul>
<li>You have at least 5 years of experience in software engineering.</li>
<li>You’ve designed, built, scaled and maintained production services, and know how to compose a service-oriented architecture.</li>
<li>You write high quality, well tested code to meet the needs of your customers.</li>
<li>You’re passionate about building an open financial system that brings the world together.</li>
<li>You responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>You have gone through rapid growth at a company (from startup to mid-size).</li>
<li>Experience with growth experiments or A/B testing frameworks.</li>
<li>You have experience with blockchain technology (such as Bitcoin, Ethereum, etc.).</li>
<li>You have experience decomposing a large monolith into microservices.</li>
<li>You’ve worked with Golang, Ruby, Docker, Rails, Postgres, MongoDB or DynamoDB.</li>
<li>You’ve built financial, high-reliability or security systems.</li>
</ul>
<p>Job #: (GB-CFBE05UK-Q126)</p>
<p>#LI-Remote</p>
<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>£122,400-£136,000 GBP</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>£122,400-£136,000 GBP</Salaryrange>
      <Skills>Golang, Cloud technology, Service-oriented architecture, Blockchain technology, Generative AI tools and copilots, Ruby, Docker, Rails, Postgres, MongoDB, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service that allows users to buy, sell, and store cryptocurrencies such as Bitcoin, Ethereum, and Litecoin.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7569402</Applyto>
      <Location>Remote - UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ded9d7ff-8aa</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000-$284,000 USD</Salaryrange>
      <Skills>engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, Go, Node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides identity and authentication services for thousands of customers and millions of users.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7719329</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5196c4ac-d97</externalid>
      <Title>Senior Software Engineer - Infrastructure and Tools</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Infrastructure teams. As a key member of our team, you will build scalable systems to power the Databricks platform, making it the de-facto platform for running Big Data and AI workloads.</p>
<p>Your responsibilities will include building and extending components of the core Databricks infrastructure, architecting multi-cloud systems and abstractions to allow the Databricks product to run on top of existing Cloud providers, improving software development workflows for engineering and operational efficiency, using our own data and AI platform to analyze build and test logs and metrics to identify areas for improvement, developing automated build, test, and release infrastructures, and setting and upholding the standard for engineering processes to support high-quality engineering.</p>
<p>To succeed in this role, you will need a BS (or higher) in Computer Science, or a related field, and 5+ years of experience writing production code in one of Java, Scala, Go, C++, or Python. You should also have passion for building highly scalable and reliable infrastructure, experience architecting, developing, and deploying large-scale distributed systems at scale, and experience with cloud APIs and cloud technologies such as AWS, Azure, GCP, Docker, Kubernetes, or Terraform.</p>
<p>In addition to a competitive salary, we offer comprehensive health coverage, 401(k) plan, equity awards, flexible time off, paid parental leave, family planning, gym reimbursement, annual personal development fund, work headphones reimbursement, employee assistance program, and business travel accident insurance.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, Go, C++, Python, Cloud APIs, Cloud technologies, AWS, Azure, GCP, Docker, Kubernetes, Terraform</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6318503002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>
<p>Contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability. Troubleshoot and resolve complex deployment issues, ensuring minimal disruption to development cycles.</p>
<p>Develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment. Document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>
<p>Proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes. Drive standardization and foster collaboration across different product teams to achieve a unified and efficient SDLC.</p>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. is required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability to include defining requirements with stakeholders and implementation until it is accepted by the stakeholders.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers.</li>
<li>Manage the full lifecycle of feature development from requirement definition to deployment on classified networks.</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements.</li>
<li>Lead customer engagements to translate mission needs into technical requirements.</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria.</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving.</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings.</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li><strong>Full Stack Development:</strong> Proficiency in front-end, back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases.</li>
<li><strong>Cloud-Native Technologies:</strong> Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li><strong>Data Engineering:</strong> Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li><strong>AI Application Integration:</strong> Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial.</li>
<li><strong>Problem Solving:</strong> Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li><strong>Collaboration and Communication:</strong> Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li><strong>Adaptability and Learning Agility:</strong> Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up to date with emerging trends in software engineering.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle) $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado) $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9cb24149-c62</externalid>
      <Title>Principal Software Engineer, Productivity</Title>
      <Description><![CDATA[<p>We are looking for a Principal-level engineer who is passionate about building and evolving the developer productivity ecosystem used by the entire Workflows Engineering organisation.</p>
<p>As a productivity engineer, you&#39;ll work with both our Engineering and Site Reliability teams, owning our developer CLI (Golang) and Kubernetes tooling, automated release processes, and CI/CD systems in CircleCI.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Collaborate with the SRE and Engineering teams to manage, extend, and enhance existing developer productivity and platform tooling for local and remote Kubernetes environments</li>
<li>Own and optimise CI/CD pipelines in CircleCI</li>
<li>Assist in weekly release orchestration</li>
<li>Automate and improve processes via Golang tooling and Okta Workflows</li>
</ul>
<p>Minimum Required Knowledge, Skills, and Abilities:</p>
<ul>
<li>10+ years of experience with software engineering processes, agile frameworks, tools (e.g., programming proficiency in a language, preferably Go or a similar compiled language), methods, test development, algorithms, and data structures</li>
<li>Experience with Cloud Native Technologies (Kubernetes, ArgoCD, Crossplane, Docker)</li>
<li>Passionate about learning new technical ecosystems</li>
<li>Interested in working with container deployment and orchestration technologies at scale, and familiar with the fundamentals, including service discovery, deployments, monitoring, scheduling, and load balancing</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with CI/CD systems (such as CircleCI or GitHub Actions)</li>
<li>Experience with development and deployment in a hosted cloud environment, preferably AWS</li>
</ul>
<p>Education and Training:</p>
<p>BS, MS, or PhD in Computer Science or related field</p>
<p>The annual base salary range for this position for candidates located in Canada is between $177,000-$265,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$177,000-$265,000 CAD</Salaryrange>
      <Skills>software engineering processes, agile framework, Go, Kubernetes, ArgoCD, Crossplane, Docker, CI/CD Systems, development and deployment in a hosted cloud environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7361555</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
<li>Proficient in Python or Go.</li>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
<li>Strong communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: bonus, equity, Employee Travel Credits, and other benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90423d85-ea7</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>As a Full Stack software engineer, you will work with your team and product management to make insights from data simple. We are looking for engineers who are customer-obsessed and can take on the full scope of the product and user experience beyond the technical implementation. You&#39;ll set the foundation for how we build robust, scalable and delightful products.</p>
<p>You&#39;ll help our customers achieve the full project lifecycle, from loading data and visualizing results to creating statistical models and deploying them as production artifacts. Example experiences you&#39;ll create include:</p>
<ul>
<li>Simple workflows to create, configure, and manage large-scale compute clusters, networks and data sources.</li>
<li>Create, deploy, test, and upgrade complex data pipelines with powerful features to visualize data graphs.</li>
<li>Seamless onboarding and management for all members of an organisation to become data-driven.</li>
<li>Provide a great SQL-centric data exploration and dashboarding experience on Databricks.</li>
<li>An interactive environment for collaborative data projects at massive scale with an easy path to production.</li>
</ul>
<p>We are looking for engineers with 5+ years of experience with HTML, CSS, and JavaScript, passion for user experience and design, and a deep understanding of front-end architecture. You should be comfortable working towards a multi-year vision with incremental deliverables, motivated by delivering customer value, and experienced with modern JavaScript frameworks and server-side web technologies.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, SQL, Cloud technologies (AWS, Azure, GCP, Docker, or Kubernetes), Modern JavaScript frameworks (React, Angular, or VueJs/Ember), Server-side web technologies (Node.js, Java, Python, Scala, C#, C++, Go)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best Data Intelligence Platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5445641002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe04c8cc-782</externalid>
      <Title>Forward Deployed Engineering Manager</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<p>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</p>
<p>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</p>
<p>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</p>
<p>Why Join Us</p>
<p>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</p>
<p>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</p>
<p>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</p>
<p>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</p>
<p>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</p>
<p>The role</p>
<p>We’re hiring a Forward Deployed Engineering Manager to lead the design, development, and delivery of reinforcement learning environments for agentic AI systems.</p>
<p>You’ll manage a team responsible for building sandboxed, reproducible environments (terminal-based workflows, browser automation, and computer-use simulations) that power both model training and human-in-the-loop evaluation. This is a hands-on leadership role where you’ll set technical direction, guide execution, and stay close to architecture and critical systems.</p>
<p>What You’ll Do</p>
<p>Lead, hire, and develop a high-performing team of Forward Deployed Engineers, setting a high bar for ownership, velocity, and technical quality</p>
<p>Own the RL environment roadmap, aligning team execution with customer needs and evolving model capabilities</p>
<p>Oversee development of sandboxed environments (terminal, browser, tool-augmented workspaces) that support deterministic execution and multi-step agent interaction</p>
<p>Ensure reliability, observability, and data integrity through strong instrumentation (logging, trajectory capture, state snapshotting)</p>
<p>Drive infrastructure excellence across containerization, sandboxing, CI/CD, automated testing, and monitoring</p>
<p>Partner cross-functionally with data operations, product, and leading AI labs to define task design, evaluation protocols, and environment requirements</p>
<p>Enable rapid prototyping and iteration, helping the team move from ambiguous requirements to production-ready systems quickly</p>
<p>Stay close to the technical details: reviewing architecture, unblocking complex issues, and guiding design decisions</p>
<p>What We’re Looking For</p>
<p>5+ years of software engineering experience (Python)</p>
<p>2+ years of experience managing or leading engineers in fast-paced environments</p>
<p>Strong experience with containerization and sandboxing (Docker, Firecracker, or similar)</p>
<p>Solid understanding of reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)</p>
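<p>Those fundamentals can be made concrete with a minimal, Gym-style episode loop; the <code>GridEnv</code> environment and <code>run_episode</code> helper below are hypothetical illustrations (not part of the role), showing an observation/action space, a sparse reward design, episode termination, and trajectory capture.</p>

```python
import random

class GridEnv:
    """Hypothetical toy 1-D grid MDP: start at position 0, reach position 4.

    Observation space: an int position in 0..4.  Action space: {0: left, 1: right}.
    Reward design: sparse, +1.0 only on reaching the goal (which ends the episode).
    """
    def __init__(self, size=5, max_steps=20):
        self.size, self.max_steps = size, max_steps

    def reset(self):
        self.pos, self.steps = 0, 0
        return self.pos  # initial observation

    def step(self, action):
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        self.steps += 1
        done = self.pos == self.size - 1 or self.steps >= self.max_steps
        reward = 1.0 if self.pos == self.size - 1 else 0.0
        return self.pos, reward, done

def run_episode(env, policy, seed=0):
    """Run one episode; return the captured trajectory of (obs, action, reward)."""
    rng = random.Random(seed)
    obs, done, trajectory = env.reset(), False, []
    while not done:
        action = policy(obs, rng)
        next_obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))  # trajectory capture
        obs = next_obs
    return trajectory

traj = run_episode(GridEnv(), policy=lambda obs, rng: 1)  # policy: always move right
```

<p>With the always-right policy, the episode reaches the goal in four steps, so the captured trajectory has four transitions and ends with the terminal reward of 1.0.</p>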
<p>Background in infrastructure, developer tooling, or distributed systems</p>
<p>Strong debugging skills and systems thinking across layered, containerized environments</p>
<p>Ability to operate in ambiguity and translate loosely defined problems into clear execution plans</p>
<p>Excellent communication and stakeholder management skills</p>
<p>Preferred</p>
<p>Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)</p>
<p>Familiarity with cloud infrastructure (GCP or AWS)</p>
<p>Prior experience in AI/ML platforms, data companies, or research environments</p>
<p>Contributions to open-source projects in RL, agents, or developer tooling</p>
<p>Why This Role Matters</p>
<p>RL environment quality is a critical bottleneck in advancing agentic AI. Poorly designed or unreliable environments introduce noise into training loops and directly impact model performance.</p>
<p>In this role, you’ll lead the team building the environments that define how models learn, working across a range of cutting-edge projects with leading AI labs. Alignerr offers the speed and ownership of a startup with the scale and resources of Labelbox, giving you the opportunity to have outsized impact on the future of AI.</p>
<p>About Alignerr</p>
<p>Alignerr is Labelbox’s human data organization, powering next-generation AI through high-quality training data, reinforcement learning environments, and evaluation systems. We partner directly with leading AI labs to build the data and infrastructure that push model capabilities forward.</p>
<p>Life at Labelbox</p>
<p>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</p>
<p>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</p>
<p>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</p>
<p>Growth: Career advancement opportunities directly tied to your impact</p>
<p>Vision: Be part of building the foundation for humanity&#39;s most transformative technology</p>
<p>Our Vision</p>
<p>We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.</p>
<p>Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.</p>
<p>Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$220,000 USD</Salaryrange>
      <Skills>Software engineering experience (Python), Containerization and sandboxing (Docker, Firecracker, or similar), Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces), Infrastructure, developer tooling, or distributed systems, Debugging skills and systems thinking, Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench), Familiarity with cloud infrastructure (GCP or AWS), Prior experience in AI/ML platforms, data companies, or research environments, Contributions to open-source projects in RL, agents, or developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a data-centric AI development company that provides critical infrastructure for breakthrough AI models.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5101195007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4119a38f-6e7</externalid>
      <Title>Machine Learning Fellow - Human Frontier Collective (US)</Title>
      <Description><![CDATA[<p>This is a fully remote, 1099 independent contractor opportunity with an estimated duration of six months and the potential for extension.</p>
<p>As an HFC Fellow, you&#39;ll apply your academic and professional expertise to help design, evaluate, and interpret advanced generative AI systems while gaining exposure to cutting-edge research and working alongside an interdisciplinary network of leading thinkers.</p>
<p>You&#39;ll be invited to engage in high-impact projects with our partnered AI labs and platforms, helping models understand real-world deep learning workflows: designing, reviewing, and optimizing PyTorch models; evaluating complex ML code and AI-generated implementations for efficiency and correctness; and advising on GPU optimization, scaling, and trade-offs.</p>
<p>Beyond the work, you&#39;ll become part of a supportive, interdisciplinary network of innovators and thought leaders committed to advancing frontier AI across domains.</p>
<p>You&#39;ll also contribute to research publications, collaborating with Scale&#39;s research team to co-author technical reports and research papers, boosting your academic visibility and professional recognition.</p>
<p>We&#39;re looking for individuals with a PhD or postdoctoral degree in Computer Science, Computer Engineering, or a related field, with 1-3+ years of experience as a Machine Learning Engineer or Data Scientist.</p>
<p>Key skills include strong proficiency in Python and modern ML frameworks (PyTorch, TensorFlow), experience with cloud infrastructure (AWS) and MLOps tools (Docker, Langchain), and a detail-oriented, innovative mindset with a passion for applied AI research and a commitment to collaboration.</p>
<p>Benefits include professional development, joining a top-tier network, flexible scheduling, and competitive pay.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, AWS, Docker, Langchain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Human Frontier Collective</Employername>
      <Employerlogo>https://logos.yubhub.co/humanfrontiercollective.com.png</Employerlogo>
      <Employerdescription>The Human Frontier Collective is a research-focused organisation that brings together top researchers and domain experts to collaborate on high-impact work in AI.</Employerdescription>
      <Employerwebsite>https://humanfrontiercollective.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4660340005</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>763156b0-8f1</externalid>
      <Title>Machine Learning Fellow - Human Frontier Collective (UK)</Title>
      <Description><![CDATA[<p>This is a fully remote, 1099 independent contractor opportunity with an estimated duration of six months and the potential for extension.</p>
<p>As an HFC Fellow, you&#39;ll apply your academic and professional expertise to help design, evaluate, and interpret advanced generative AI systems while gaining exposure to cutting-edge research and working alongside an interdisciplinary network of leading thinkers.</p>
<p>Responsibilities:</p>
<ul>
<li>Engage in high-impact projects with our partnered AI labs and platforms.</li>
<li>Design, review, and optimize PyTorch models.</li>
<li>Evaluate complex ML code and AI-generated implementations for efficiency and correctness.</li>
<li>Advise on GPU optimization, scaling, and trade-offs.</li>
<li>Collaborate with Scale&#39;s research team to co-author technical reports and research papers.</li>
</ul>
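<p>Evaluating AI-generated implementations for correctness, as listed above, is often done by property-checking a candidate against a trusted reference on randomized inputs. The sketch below is a generic, hypothetical example (a naive, numerically unstable softmax versus a stable reference), not a description of any actual review tooling.</p>

```python
import math
import random

def reference_softmax(xs):
    """Trusted reference: numerically stable softmax (shift inputs by the max)."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def candidate_softmax(xs):
    """Hypothetical AI-generated candidate under review: naive and unstable."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def check_equivalence(candidate, reference, trials=100, tol=1e-9, seed=0):
    """Compare candidate to reference on random inputs; return the first failing input, or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.uniform(-1000.0, 1000.0) for _ in range(5)]  # extreme values expose instability
        try:
            got = candidate(xs)
        except OverflowError:
            return xs  # candidate crashed where the reference would not
        want = reference(xs)
        if any(abs(g - w) > tol for g, w in zip(got, want)):
            return xs
    return None

failing = check_equivalence(candidate_softmax, reference_softmax)
passing = check_equivalence(reference_softmax, reference_softmax)
```

<p>Here <code>failing</code> is a concrete input on which the naive version overflows or diverges from the reference, while comparing the reference against itself yields <code>None</code>.</p>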
<p>Requirements:</p>
<ul>
<li>PhD or postdoctoral degree in Computer Science, Computer Engineering, or a related field.</li>
<li>1-3+ years of experience as a Machine Learning Engineer or Data Scientist.</li>
<li>Strong proficiency in Python and modern ML frameworks (PyTorch, TensorFlow).</li>
<li>Experience with cloud infrastructure (AWS) and MLOps tools (Docker, Langchain) is a plus.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Flexible schedule with 10–40 hour weeks.</li>
<li>Competitive pay rates varying across platforms and depending on project scope, skillset, and location.</li>
<li>Opportunity to work with a global network of engineers and experts to advance responsible AI.</li>
</ul>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, AWS, Docker, Langchain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Human Frontier Collective</Employername>
      <Employerlogo>https://logos.yubhub.co/humanfrontiercollective.com.png</Employerlogo>
      <Employerdescription>The Human Frontier Collective is a research and development organization that focuses on advancing the field of artificial intelligence.</Employerdescription>
      <Employerwebsite>https://humanfrontiercollective.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4661647005</Applyto>
      <Location>United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0796e182-42e</externalid>
      <Title>Sr. Software Engineer, Backend (Search Platform)</Title>
      <Description><![CDATA[<p>About Dialpad</p>
<p>Dialpad is the AI-native business communications platform. We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>
<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>
<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyze conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time.</p>
<p>Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do.</p>
<p>Visit dialpad.com to learn more.</p>
<p>Being a Dialer</p>
<p>At Dialpad, AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>
<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>
<p>We look for people who are intensely curious and hold themselves to a high bar. Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>
<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>
<p>Your role</p>
<p>Dialpad’s Product Engineering organization is responsible for building and maintaining the customer-facing features at scale across all of our cloud-native products and services.</p>
<p>Every day, millions of users across the world leverage our technology for communicating effectively and efficiently.</p>
<p>Every engineer on our global engineering team is given the opportunity to take ownership of a large portion of the product where they’re able to see immediate results.</p>
<p>Combining natural language processing and artificial intelligence with world-class cloud computing, the things you’ll create at Dialpad will shape the future of work, enabling companies to work from anywhere and making business communication more human.</p>
<p>Dialpad’s Analytics team owns data pipelines, multiple databases, a modular query layer, and rich front-end components to deliver intuitive and powerful end-user-facing analytics experiences that allow Dialpad customers to make data-driven business decisions.</p>
<p>Our teams are highly collaborative and comprise cross-disciplinary professionals, including Product Managers, Designers, QA specialists, and Engineers specializing in Data Engineering, Data Science, and Telephony.</p>
<p>This position reports to the Engineering Manager, who is based in Bengaluru, and follows a hybrid working arrangement based out of our Bengaluru, India office.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Contribute to the design, development, and maintenance of information retrieval and distributed systems.</li>
<li>Build and optimize search engines, including indexers, analyzers, ranking, and re-ranking strategies.</li>
<li>Work on hybrid search techniques, including dense vector manipulation, rank fusion, and reranking.</li>
<li>Maintain and enhance highly scalable search platforms with a focus on performance and cost efficiency.</li>
<li>Ensure high availability, reliability, and fault tolerance in search services.</li>
<li>Collaborate with cross-functional teams to translate business requirements into technical solutions.</li>
<li>Develop and optimize real-time distributed systems, microservices, and message-driven architectures.</li>
<li>Implement and maintain monitoring, alerting, and performance metrics for platform reliability.</li>
<li>Evaluate and integrate emerging technologies to improve search capabilities.</li>
<li>Write clean, modular, and well-tested code while following best engineering practices.</li>
<li>Participate in code reviews to ensure quality, maintainability, and scalability.</li>
<li>Provide mentorship and technical guidance to junior engineers.</li>
</ul>
<p><strong>Skills you’ll bring</strong></p>
<ul>
<li>4-7 years of experience in information retrieval or distributed systems engineering.</li>
<li>Strong understanding of search platforms and experience maintaining search engines at scale.</li>
<li>Deep knowledge of indexers, analyzers, field mapping, and ranking techniques.</li>
<li>Experience with NLP/NLU within the context of information retrieval.</li>
<li>Expertise in dense vector manipulation and optimization.</li>
<li>Familiarity with hybrid search, rank fusion, and reranking techniques.</li>
<li>Proficiency in Go and Python 3 (experience with Rust or TypeScript is a plus).</li>
<li>Strong understanding of distributed systems, microservices, and message-driven architectures.</li>
<li>Passion for real-time performance optimization and high availability.</li>
<li>Experience with API design using Swagger, OpenAPI, or equivalent tools.</li>
<li>Knowledge of gRPC or equivalent RPC protocols.</li>
<li>Experience with Docker and Kubernetes for containerized deployments.</li>
<li>Familiarity with cloud platforms (GCP preferred, AWS/Azure optional).</li>
<li>Hands-on experience with Infrastructure as Code tools like Terraform or Ansible.</li>
<li>Knowledge of CI/CD frameworks and continuous delivery practices.</li>
</ul>
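<p>As one concrete illustration of the rank fusion mentioned above, hybrid search results from a lexical index and a dense-vector index are commonly merged with reciprocal rank fusion (RRF); the document IDs and hit lists below are purely illustrative.</p>

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge ranked result lists: score(d) = sum over lists of 1 / (k + rank of d)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative hits from a keyword (lexical) index and a dense-vector index.
bm25_hits = ["doc_a", "doc_b", "doc_c"]
vector_hits = ["doc_c", "doc_a", "doc_d"]
fused = reciprocal_rank_fusion([bm25_hits, vector_hits])  # doc_a and doc_c rise to the top
```

<p>With the conventional constant k=60, a document that ranks well in both lists outscores one that appears near the top of only a single list.</p>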
<p>Why Join Dialpad</p>
<ul>
<li>Work at the center of the AI transformation in business communications</li>
<li>Build and ship agentic AI products that are redefining how companies operate</li>
<li>Join a team where AI amplifies every employee’s impact</li>
<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>
</ul>
<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential.</p>
<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>
<p>Our exceptional culture, repeatedly recognized as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>
<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>
<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>information retrieval, distributed systems engineering, search platforms, indexers, analyzers, field mapping, ranking techniques, NLP/NLU, dense vector manipulation, optimization, hybrid search, rank fusion, reranking, Go, Python 3, API design, gRPC, Docker, Kubernetes, cloud platforms, Infrastructure as Code, CI/CD frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dialpad</Employername>
      <Employerlogo>https://logos.yubhub.co/dialpad.com.png</Employerlogo>
      <Employerdescription>Dialpad is an AI-native business communications platform that unifies calling, messaging, meetings, and contact center on a single platform.</Employerdescription>
      <Employerwebsite>https://dialpad.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dialpad/jobs/8340906002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86dc459d-a0f</externalid>
      <Title>Senior Software Engineer, Platform as a Service</Title>
      <Description><![CDATA[<p>We are seeking a technical, hands-on, empathetic senior software engineer to help define and deliver our Platform as a Service (PAAS) mission. As a senior engineer on the PAAS team, you will collaborate with the team to deliver forward-looking, customer-centric tooling. Your expertise in building and using best-in-class infrastructure tools will equip our engineering organisation with tools to move quickly and deliver features that bring millions of people together.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Working with customer engineering teams to ensure we’re building solutions that developers love using day-in and day-out</li>
<li>Collaborating with the Internal Development Experience (IDX) team to ensure an easy path to go from development through staging into production</li>
<li>Working with the Platform Security team in order to secure every path to production</li>
<li>Shipping Rust code to YAY, our in-house deployment tooling built around Google Kubernetes Engine and Temporal</li>
<li>Exposing the full flexibility of Kubernetes for users while abstracting the complexities away</li>
<li>Building tools to manage the configuration, observability, and scaling characteristics of our infrastructure</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience in software development with a focus on tooling, infrastructure, and automation</li>
<li>Experience working on multi-milestone and even multi-quarter projects</li>
<li>Expertise and empathy when troubleshooting issues with customer engineering teams</li>
<li>Expertise using and building upon the primitives of standard cloud infrastructure tooling like Kubernetes and Docker</li>
<li>Experience developing in cloud-based environments (we use Google Cloud; knowledge of Amazon Web Services and/or Azure also great!)</li>
<li>Experience with infrastructure-as-code tooling (we use Terraform)</li>
</ul>
<p>Bonus points for experience with CI, build, and deployment technologies like Buildkite, Bazel, and Terraform, as well as cloud networking tools like Istio and Envoy and application observability tools like Datadog and/or Sentry.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>Rust, Kubernetes, Docker, Terraform, Google Cloud, Amazon Web Services, Azure, CI/CD, infrastructure-as-code, Buildkite, Bazel, istio, envoy, Datadog, Sentry</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, with a strong focus on gaming.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8409021002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f95ac4b6-a7c</externalid>
      <Title>Software Engineer - Delivery Platform</Title>
      <Description><![CDATA[<p>At Squarespace, we&#39;re reimagining how people bring their ideas to life online. Our Infrastructure Engineering teams are at the heart of that mission --- building the platforms and tooling that let every engineer ship with speed and confidence.</p>
<p>As a Software Engineer on the Delivery team, you&#39;ll work on the systems that sit between GitHub and production and that touch nearly every Squarespace service: CI/CD pipelines, GitOps workflows, and the deployment platform that spans our Kubernetes clusters and regions. If you&#39;re passionate about developer experience, modern deployment tooling, and making other engineers more productive, we want to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve the platform that ships Squarespace services to production --- CI/CD pipelines, GitOps workflows, and deployment tooling across Kubernetes clusters.</li>
<li>Increase adoption of modern deployment tooling across high-traffic services.</li>
<li>Design reusable Helm charts, GitOps templates, and standardized rollout/rollback patterns for engineering teams.</li>
<li>Identify improvements to CI pipeline performance and reliability across the organization.</li>
<li>Contribute to AI-assisted delivery tooling that helps engineers self-serve and diagnose build failures.</li>
<li>Develop technical documentation to ensure knowledge sharing and reusability.</li>
</ul>
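<p>A standardized rollout/rollback pattern like the one described above can be reduced to a health-gated loop: shift traffic in stages and roll back on the first failed check. This is a schematic sketch with a hypothetical <code>healthy</code> callback, not Squarespace&#39;s actual tooling.</p>

```python
def progressive_rollout(stages, healthy):
    """Shift traffic through canary stages; roll back on the first failed health check.

    stages  -- increasing traffic percentages, e.g. [10, 50, 100]
    healthy -- callback(percent) -> bool, e.g. an error-rate check against an SLO
    Returns the final traffic percentage: 100 on success, 0 after a rollback.
    """
    current = 0
    for percent in stages:
        current = percent  # in real tooling: update a weighted routing rule
        if not healthy(percent):
            return 0  # rollback: route all traffic back to the stable release
    return current

ok = progressive_rollout([10, 50, 100], healthy=lambda p: True)
bad = progressive_rollout([10, 50, 100], healthy=lambda p: p < 50)  # fails at the 50% stage
```

<p>The same decision logic underlies tools like Argo Rollouts, where the health check is typically an automated analysis of metrics rather than a callback.</p>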
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of backend or platform engineering experience.</li>
<li>Experience building or improving CI/CD pipelines (e.g., Drone, Jenkins, GitHub Actions, Harness).</li>
<li>Knowledge of Docker and Kubernetes.</li>
<li>Familiarity with GitOps tooling such as Argo CD or Flux.</li>
<li>Proficiency in Go, Python, or Java.</li>
<li>Experience with Google Cloud, AWS, or Azure.</li>
<li>Comfortable with Agile methodologies and Git.</li>
<li>Experience troubleshooting issues with users.</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<ul>
<li>A choice between medical plans with an option for 100% covered premiums</li>
<li>Fertility and adoption benefits</li>
<li>Access to supplemental insurance plans for additional coverage</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Retirement benefits with employer match</li>
<li>Flexible paid time off</li>
<li>12 weeks paid parental leave and family care leave</li>
<li>Pretax commuter benefit</li>
<li>Education reimbursement</li>
<li>Employee donation match to community organizations</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
<li>Dog-friendly workplace</li>
<li>Free lunch and snacks</li>
<li>Private rooftop</li>
<li>Hack week twice per year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$110,500 - $178,250 USD</Salaryrange>
      <Skills>backend or platform engineering experience, CI/CD pipelines, Docker, Kubernetes, GitOps tooling, Go, Python, Java, Google Cloud, AWS, Azure, Agile methodologies, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and is headquartered in New York City.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7789058</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2637f59-e14</externalid>
      <Title>Full-Stack Software Engineer, Reinforcement Learning</Title>
      <Description><![CDATA[<p>As a Full-Stack Software Engineer in RL, you&#39;ll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude&#39;s next generation depends on the quality of the data we train it on, and the systems you build are what make that data possible. You&#39;ll own product surfaces end-to-end, from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day.</p>
<p>You don&#39;t need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product against it, fast. This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn&#39;t typing; it&#39;s judgment, taste, and the ability to react to what researchers need next.</p>
<p>You&#39;ll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you&#39;ll do it in a loop that closes in hours and days, not quarters or months.</p>
<p>Anthropic&#39;s Reinforcement Learning organization leads the research and development that trains Claude to be capable, reliable, and safe. We&#39;ve contributed to every Claude model, with significant impact on the autonomy and coding capabilities of our most advanced models.</p>
<p>Our work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models.</p>
<p>The RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model.</p>
<p>Our engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible, from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and extend web platforms for RL environment creation, management, and quality review, including environment configuration, versioning, and validation workflows</li>
<li>Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction</li>
<li>Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early</li>
<li>Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and reward hacking</li>
<li>Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure</li>
<li>Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels</li>
<li>Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks</li>
<li>Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong software engineering fundamentals and real full-stack range; you&#39;re comfortable owning a surface from database schema to frontend</li>
<li>Proficient in Python and a modern web stack (React, TypeScript, or similar)</li>
<li>Track record of shipping systems that solved a hard problem, not just shipped on time; e.g. you built the thing that made your team 10x faster, or the internal tool nobody thought was possible</li>
<li>Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket</li>
<li>Found yourself wondering &quot;why isn&#39;t this moving faster?&quot; in previous roles, and then did something about it</li>
<li>Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers</li>
<li>Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work</li>
<li>Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before</li>
<li>Care about Anthropic&#39;s mission to build safe, beneficial AI and want your work to contribute directly to it</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Built data collection, labeling, or annotation platforms, ideally ones that had to scale across many vendors or many task types</li>
<li>Background building multi-tenant platforms with role-based access, audit trails, and vendor management workflows</li>
<li>Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines</li>
<li>Familiarity with LLM training, fine-tuning, or evaluation workflows</li>
<li>Experience with async Python (Trio, asyncio) or high-throughput API design</li>
<li>Background in dashboards, monitoring, or observability tooling</li>
<li>Experience working directly with external vendors or partners on technical integrations</li>
<li>A background that isn&#39;t a straight line, e.g. math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope</li>
</ul>
<p><strong>Representative Projects</strong></p>
<ul>
<li>Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks</li>
<li>Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation</li>
<li>Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training</li>
<li>Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments</li>
<li>Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training</li>
<li>Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Modern web stack, React, TypeScript, Strong software engineering fundamentals, Full-stack range, Database schema, Frontend, Cloud infrastructure, Docker, CI/CD pipelines, LLM training, Fine-tuning, Evaluation workflows, Async Python, High-throughput API design, Dashboards, Monitoring, Observability tooling, Data collection, Labeling, Annotation platforms, Multi-tenant platforms, Role-based access, Audit trails, Vendor management workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company working on developing artificial intelligence systems. It has a quickly growing team of researchers, engineers, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186067008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2d718a-f65</externalid>
      <Title>Senior Software Engineer, AI Platform and Enablement</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>
<p><strong>About the Team</strong></p>
<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>
<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployments / GPU infrastructure</li>
<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>
<li>Optimise and scale our models and algorithms for efficient inference</li>
<li>Deploy, monitor, and manage AI models in production</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Experience in deploying and managing AI models in production</li>
<li>Experience with large-volume data pipeline tools such as Spark, Flume, Dask, etc.</li>
<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)</li>
<li>Knowledge of DevOps and MLOps best practices</li>
<li>Strong problem-solving abilities and excellent communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous healthcare package</li>
<li>401k matching program</li>
<li>Catered lunches</li>
<li>Flexible vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $286,000/year</Salaryrange>
      <Skills>Experience in deploying and managing AI models in production, Experience with large-volume data pipeline tools like Spark, Flume, Dask, etc., Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has 150 employees and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7580335003</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>48e2e160-bde</externalid>
      <Title>Senior Solutions Architect - Weights &amp; Biases</Title>
      <Description><![CDATA[<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>
<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love diving into infrastructure problems and solving them systematically</li>
<li>You’re curious about how to scale complex ML systems in production environments</li>
<li>You’re an expert in building and running containerized, distributed systems</li>
</ul>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 to $200,000</Salaryrange>
      <Skills>Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4622845006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f79572c2-264</externalid>
      <Title>Technical Support Engineer</Title>
      <Description><![CDATA[<p>The Technical Support Engineer prolet acts as a Starburst SME for a book of Majors and Strategic accounts. The role involves providing support for standard and custom deployments, answering technical questions, and assisting with supported LTS upgrades. The engineer will also be responsible for peer training and development, personal continued education, and contributing to reference documentation.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide support for standard and custom deployments</li>
<li>Answer break/fix and non-break/fix technical questions through SFDC ticketing system</li>
<li>Efficiently reproduce reported issues by leveraging tools (minikube, minitrino, docker-compose, etc.), identify root causes, and provide solutions</li>
<li>Open SEP and Galaxy bug reports in Jira and feature requests in Aha!</li>
</ul>
<p>LTS Upgrades:</p>
<ul>
<li>Provide upgrade support upon customer request</li>
<li>Customer must be on a supported LTS version at the time of request</li>
<li>TSE must communicate unsupported LTS requests to the Account team as these require PS services</li>
</ul>
<p>Monthly Technical Check-ins:</p>
<ul>
<li>Conduct regularly scheduled technical check-ins with each BU</li>
<li>Discuss open support tickets, provide updates on product bugs, and provide best practice recommendations based on your observations and ticket trends</li>
<li>Responsible for ensuring customer environments are on supported LTS versions</li>
</ul>
<p>Knowledge Sharing/Technical Enablement:</p>
<ul>
<li>Knowledge exchange and continued technical enablement are crucial for the development of our team and the customer experience</li>
<li>It&#39;s essential that we keep our product expertise and documentation current and that all team members have access to information</li>
<li>Contribute to our reference documentation</li>
<li>Lead peer training</li>
<li>Act as a consultant to our content teams</li>
<li>Own your personal technical education journey</li>
</ul>
<p>Project Involvement</p>
<ul>
<li>Contribute to or drive components of departmental and cross-functional initiatives</li>
</ul>
<p>Partner with Leadership</p>
<ul>
<li>Identify areas of opportunity with potential solutions for inefficiencies or obstacles within the team and cross-functionally</li>
<li>Provide feedback to your manager on continued ed. opportunities, project ideas, etc.</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of support experience</li>
<li>3+ years of Big Data, Docker, Kubernetes and cloud technologies experience</li>
<li>Ability to Travel: This role will require 25% in-person travel for purposes including but not limited to new hire onboarding, team and department offsites, customer engagements, and other company events</li>
</ul>
<p>Skills</p>
<ul>
<li>Big Data (Hadoop, Data Lakes, Spark)</li>
<li>Docker and Kubernetes</li>
<li>Cloud technologies (AWS, Azure, GCP)</li>
<li>Security - Authentication (LDAP, OAuth2.0) and Authorization technologies</li>
<li>SSL/TLS</li>
<li>Linux Skills</li>
<li>DBMS Concepts/SQL Exposure</li>
<li>Languages: SQL, Java, Python, Bash</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Docker, Kubernetes, Cloud technologies, Security, Linux Skills, DBMS Concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starburst</Employername>
      <Employerlogo>https://logos.yubhub.co/starburst.io.png</Employerlogo>
      <Employerdescription>Starburst is a data platform company that provides analytics, applications, and AI services. It has customers in over 60 countries.</Employerdescription>
      <Employerwebsite>https://www.starburst.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/starburst/jobs/5124882008</Applyto>
      <Location>Hyderabad, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc54ed6c-ca0</externalid>
      <Title>Full-Stack Engineer, Core Services (Senior Level)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Full-Stack Engineer to join our Core Services team. As a senior-level engineer, you&#39;ll design, build, and optimise the core systems and management platforms that power the Instabase platform.</p>
<p>This is a high-impact role for a &#39;product-minded engineer&#39;. In our Core Services team, we treat our platform as a product. Because we operate with a lean team, you will have end-to-end ownership: from writing Product Requirement Documents (PRDs) to building the high-performance backend services and scalable infrastructure that support them.</p>
<p>Responsibilities:</p>
<ul>
<li>Full Stack Development: You will function as a product-minded engineer for our internal platform. This involves architecting secure infrastructure (Kubernetes, Docker) and backend services (Go, Python, PostgresDB), while also building the frontend interfaces (React, TS) to support features.</li>
<li>Developer Experience: Create the internal platforms and dashboards that improve developer velocity, reliability, and observability across the entire organisation.</li>
<li>Technical Leadership: Act as a technical leader who mentors junior engineers, contributes to the entire infrastructure codebase, and identifies root causes for critical system issues.</li>
</ul>
<p>About you:</p>
<ul>
<li>Education: BS, MS, or PhD in Computer Science, or equivalent experience in a technical field such as Physics or Mathematics.</li>
<li>Experience: 5+ years of professional software development experience with a strong foundation in CS fundamentals.</li>
<li>Backend Expertise: Proficiency in Go and Python, with a deep understanding of building scalable backend services and APIs.</li>
<li>Frontend Expertise: Strong experience with React, TypeScript, and JavaScript for building complex, data-rich web applications.</li>
<li>Infrastructure &amp; Orchestration: Proficiency with Docker, Kubernetes, and cloud infrastructure (AWS, GCP, or Azure).</li>
<li>Product Thinking &amp; UI Design: You are comfortable functioning as your own PM and Designer and write technical specs (PRDs) to define how users interact with infrastructure.</li>
<li>Communication: Excellent communication skills to represent technical and product decisions to the wider engineering team.</li>
</ul>
<p>Good to have:</p>
<ul>
<li>Experience with React Native for mobile or cross-platform applications.</li>
<li>Prior experience in a startup environment where you handled multi-functional responsibilities (Dev, PM, and Design).</li>
</ul>
<p>Compensation: The base salary range for this role is $190,000 to $205,000 + bonus, equity and US benefits.</p>
<p>US Benefits:</p>
<ul>
<li>Flexible PTO: Because life is better when you actually live it!</li>
<li>Comprehensive Coverage: Top-notch medical, dental, and vision insurance.</li>
<li>401(k) with Matching: We’ve got your back for a secure future.</li>
<li>Parental Leave &amp; Fertility Benefits: Supporting you in growing your family, your way.</li>
<li>Therapy Sessions Covered: Mental health matters, 10 free sessions through Samata Health.</li>
<li>Wellness Stipend: For gym memberships, fitness tech, or whatever keeps you thriving.</li>
<li>Lunch on Us: Enjoy a lunch credit when you&#39;re in the office.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Instabase is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 to $205,000 + bonus, equity and US benefits</Salaryrange>
      <Skills>Go, Python, PostgresDB, Kubernetes, Docker, React, TypeScript, JavaScript, Cloud infrastructure (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase provides a platform for organisations to solve unstructured data problems using AI.
It has customers representing large and complex organisations worldwide.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8186577002</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f80914c-588</externalid>
      <Title>Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About Role</p>
<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>
<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>
<p><strong>Responsibilities</strong></p>
<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>
<ul>
<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>
<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>
<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>
<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>
<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>
<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>
<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>
</ul>
<p><strong>Key Qualifications</strong></p>
<ul>
<li>3+ years of experience working in software development covering distributed systems and databases.</li>
<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>
<li>Hands-on experience with modern observability stacks, including Prometheus and Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>
<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>
<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>
<li>Experience with Infrastructure as Code tools like Salt or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>
<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This is technology already used by Cloudflare’s enterprise customers, provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>
<p>Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>
<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses.</p>
<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a global network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7267602</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a438f945-411</externalid>
      <Title>Senior Site Reliability Engineer (Resilience) - Platform Resilience</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Site Reliability Engineer (SRE) to join our Platform Engineering department. As an SRE, you will lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure. You will grow our global Platform infrastructure to meet increasing scaling demands by developing and maintaining software, tooling, and automations.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop and maintain software, tooling, and automations to ensure the reliability and scalability of our global infrastructure.</li>
<li>Lead technical initiatives to automate system engineering efforts, ensuring the reliability of our global infrastructure.</li>
<li>Collaborate with engineers to identify, implement, and deliver solutions that meet the needs of our customers.</li>
<li>Champion an environment focused on collaboration, operational excellence, and uplifting others.</li>
<li>Respond to major incidents and prevent repeated customer impact through prioritized problem management.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>A track record of successes and lessons learned from striving for &#39;progress, not perfection&#39; in the name of Platform reliability.</li>
<li>Background in software engineering, enabling you to collaborate with engineers to expertly identify, implement, and deliver solutions.</li>
<li>Experience in public cloud and managed Kubernetes services is advantageous.</li>
<li>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Operated a SaaS product in a public cloud, ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>
<li>Built or operated a Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</li>
<li>Written non-trivial programs in Golang or other programming languages.</li>
<li>Worked with containerized services (such as Docker).</li>
<li>Proven experience leading and improving alerting and major incident management processes, using metrics systems (e.g., Elastic Stack, Graphite, Prometheus, Influx) to diagnose issues and quantify impact for audiences at varying levels of the organization.</li>
<li>Experienced in system administration, with professional skills in Linux on distributed systems at scale.</li>
<li>Diagnosed or designed, implemented, and created solutions with the Elastic Stack.</li>
<li>Thrived in a self-organizing, knowledge-sharing, globally distributed team environment.</li>
<li>Brought out the best in team members by uplifting others through coaching and mentoring.</li>
</ul>
<p>Compensation:</p>
<ul>
<li>This role is eligible to participate in Elastic&#39;s stock program.</li>
<li>Total rewards package includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</li>
<li>Typical starting salary range for this role is $154,800-$195,600 USD.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$154,800-$195,600 USD</Salaryrange>
      <Skills>Software engineering, Public cloud, Managed Kubernetes services, Infrastructure-as-Code tooling, Containerized services, System administration, Linux on distributed systems, Golang, Crossplane, Terraform, Docker, Elastic Stack, Graphite, Prometheus, Influx</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic develops a search and analytics platform used by over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7794016</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>40fb0673-f64</externalid>
      <Title>Technical Support Engineer</Title>
      <Description><![CDATA[<p>The Technical Support Engineer acts as a Starburst SME for a book of Majors and Strategic accounts. The role involves providing support for standard and custom deployments, answering technical questions, and assisting with supported LTS upgrades. The TSE is also responsible for peer training and development, personal continued education, and contributing to our reference documentation.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide support for standard and custom deployments</li>
<li>Answer break/fix and non-break/fix technical questions through SFDC ticketing system</li>
<li>Efficiently reproduce reported issues by leveraging tools (minikube, minitrino, docker-compose, etc.), identify root causes, and provide solutions</li>
<li>Open SEP and Galaxy bug reports in Jira and feature requests in Aha!</li>
</ul>
<p>LTS Upgrades:</p>
<ul>
<li>Provide upgrade support upon customer request</li>
<li>Customer must be on a supported LTS version at the time of request</li>
<li>TSE must communicate unsupported LTS requests to the Account team as these require PS services</li>
</ul>
<p>Monthly Technical Check-ins:</p>
<ul>
<li>Conduct regularly scheduled technical check-ins with each BU</li>
<li>Discuss open support tickets, provide updates on product bugs and provide best practice recommendations based on your observations and ticket trends</li>
</ul>
<p>Knowledge Sharing/Technical Enablement:</p>
<ul>
<li>Knowledge exchange and continued technical enablement are crucial for the development of our team and the customer experience</li>
<li>It&#39;s essential that we keep our product expertise and documentation current and that all team members have access to information</li>
<li>Contribute to our reference documentation</li>
<li>Lead peer training</li>
<li>Act as a consultant to our content teams</li>
<li>Own your personal technical education journey</li>
</ul>
<p>Project Involvement:</p>
<ul>
<li>Contribute to or drive components of departmental and cross functional initiatives</li>
</ul>
<p>Partner with Leadership:</p>
<ul>
<li>Identify areas of opportunity with potential solutions for inefficiencies or obstacles within the team and cross-functionally</li>
<li>Provide feedback to your manager on continued education opportunities, project ideas, etc.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of support experience</li>
<li>3+ years of Big Data, Docker, Kubernetes and cloud technologies experience</li>
<li>Ability to Travel: This role will require 25% in-person travel for purposes including but not limited to new hire onboarding, team and department offsites, customer engagements, and other company events.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>265 000 - 335 000 PLN</Salaryrange>
      <Skills>Big Data, Docker, Kubernetes, Cloud technologies, Security - Authentication, Authorization technologies, SSL/TLS, Linux Skills, DBMS Concepts/SQL Exposure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starburst</Employername>
      <Employerlogo>https://logos.yubhub.co/starburst.io.png</Employerlogo>
      <Employerdescription>Starburst is a data platform company that provides analytics, applications, and AI solutions. It has customers in over 60 countries.</Employerdescription>
      <Employerwebsite>https://www.starburst.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/starburst/jobs/5034562008</Applyto>
      <Location>Warsaw, Poland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9c1bbf0d-969</externalid>
      <Title>Backend Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Backend Engineer to join our team. As a Backend Engineer, you will work on xAI&#39;s production systems that power the API. You will design, implement, and maintain reliable and horizontally scalable distributed systems. Our backend infrastructure is written in Rust, so familiarity with a compiled language such as C++, Rust, or Go is highly beneficial.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain reliable and horizontally scalable distributed systems</li>
<li>Work closely with the team to identify and solve pain points</li>
<li>Collaborate with the team to ensure high-quality code and architecture</li>
<li>Participate in code reviews and contribute to the improvement of the codebase</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Expert knowledge of either Rust or C++</li>
<li>Experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems</li>
<li>Knowledge of service observability and reliability best practices</li>
<li>Experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Knowledge of Python</li>
<li>Experience with Docker, Kubernetes, and containerized applications</li>
<li>Expert knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping)</li>
<li>Hands-on experience with LLM APIs, embeddings, or RAG patterns</li>
<li>Track record of delivering user-facing software at scale</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, C++, Distributed Systems, Service Observability, Database Management, Python, Docker, Kubernetes, gRPC, LLM APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4991448007</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ea9aa5d2-721</externalid>
      <Title>Data Engineer Intern (Summer 2026)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. We run one of the world&#39;s largest networks that powers millions of websites and other Internet properties.</p>
<p>This internship is targeting students with experience and interest in Data Engineering. The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. This role relies on systems engineering principles to design and implement solutions that span the data lifecycle - collect, ingest, process, store, persist, access, and deliver data at scale and at speed.</p>
<p>Responsibilities</p>
<ul>
<li>Work through all stages of a data solution lifecycle – analyse / profile data, create conceptual, logical and physical data model designs, architect and design ETL, reporting and analytics</li>
<li>Apply knowledge of modern enterprise data architectures, design patterns, and data tool sets</li>
<li>Identify key metrics and build exec-facing dashboards to track progress of the business and its highest priority initiatives</li>
<li>Identify key business levers, establish cause &amp; effect, perform analysis, and communicate key findings to various stakeholders to facilitate data driven decision-making</li>
</ul>
<p>Requirements</p>
<ul>
<li>Currently enrolled in an M.S. program in Computer Science, Engineering, or a related STEM field</li>
<li>Experience working with Go, Python, SQL, Java, or equivalent programming languages</li>
<li>Experience working with distributed systems (Spark etc.)</li>
<li>Hands-on experience in data pipelines/ frameworks development</li>
<li>Ability and interest to learn new skills and technologies quickly</li>
<li>Excellent communication and problem-solving skills</li>
<li>Ability to commit to a 12-week summer internship</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Familiarity with container based deployments such as Docker and Kubernetes</li>
<li>Experience with JavaScript, Typescript, and React</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This is technology already used by Cloudflare’s enterprise customers, provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, SQL, Java, Distributed systems, Data pipelines, Frameworks development, JavaScript, Typescript, React, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7374706</Applyto>
      <Location>In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45a87931-4a2</externalid>
      <Title>Security Engineer - Platform Security</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented and driven Security Engineer to join our Platform Security team. You will build cutting-edge security solutions to protect our Kubernetes-based infrastructure and advance secure AI-driven systems.</p>
<p>In this role, you will design and implement AI-powered security tools, proactively address vulnerabilities, and champion secure engineering practices across the organisation.</p>
<p>Ideal candidates are passionate about impactful innovation, excel at writing clean, efficient code, and thrive in fast-paced environments to support xAI&#39;s mission of creating a trusted and secure global digital platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build AI-driven security tooling and agents using Grok to identify, analyse, and mitigate vulnerabilities in the platform infrastructure and customer-facing application(s)</li>
<li>Proactively identify security problems to solve and own the design and implementation end-to-end</li>
<li>Collaborate and be a security champion while driving technical decisions across the organisation</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>3+ years of experience in fast-paced, high-impact environments, ideally at startups or tech-driven companies.</li>
<li>Expertise in Python, Rust, or Go, with strong problem-solving skills and a focus on clean, efficient code.</li>
<li>Certifications like CISA, CRISC, CGEIT, Security+, CASP+, or similar preferred.</li>
<li>Proven experience building tools or systems from scratch, with a focus on scalable solutions.</li>
<li>Proficiency in designing scalable backend architectures to support secure systems.</li>
<li>Familiarity with security testing frameworks (e.g., Burp Suite, OWASP ZAP, SAST/DAST).</li>
<li>Experience with Docker and Kubernetes for deploying and securing containerized applications.</li>
<li>Knowledge of software supply chain tools, including SBOM management and dependency scanning.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience developing AI-driven security tools or integrating AI into security workflows.</li>
<li>Familiarity with Kubernetes-based environments and securing cloud-native infrastructure.</li>
<li>Proven ability to drive technical decisions and influence security practices across teams.</li>
<li>A passion for challenging the status quo and building transformative security solutions.</li>
<li>Strong collaboration skills, with experience working in dynamic, cross-functional teams.</li>
<li>A sense of humour and adaptability to thrive in a fast-paced, mission-driven environment.</li>
</ul>
<p>ITAR Requirements:</p>
<p>To conform to U.S. Government export regulations, the applicant must be (i) a U.S. citizen or national, (ii) a U.S. lawful permanent resident (aka green card holder), (iii) a refugee under 8 U.S.C. § 1157, or (iv) an asylee under 8 U.S.C. § 1158, or be eligible to obtain the required authorisations from the U.S. Department of State.</p>
<p>Compensation and Benefits:</p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Rust, Go, Grok, Docker, Kubernetes, Burp Suite, OWASP ZAP, SAST/DAST, SBOM management, dependency scanning, AI-driven security tools, integrating AI into security workflows, Kubernetes-based environments, securing cloud-native infrastructure, driving technical decisions, influencing security practices, challenging the status quo, transformative security solutions, collaboration skills, dynamic cross-functional teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4835611007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d547efb6-77f</externalid>
      <Title>Senior Linux Systems Engineer</Title>
      <Description><![CDATA[<p>We are looking for a highly motivated Senior Linux Systems Engineer to join our Computing Team!</p>
<p>You will work on high-performance computing (HPC) systems that are part of our sequencing platform. The ideal candidate is a hands-on Linux expert who thrives on optimizing performance and building secure, scalable and reliable systems in a fast-paced environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain high-performance Linux systems supporting compute and data-intensive workloads</li>
<li>Optimise system performance through kernel and filesystem tuning; identify and eliminate I/O, memory, or network bottlenecks</li>
<li>Automate provisioning and configuration management using orchestration tools such as Ansible and Salt</li>
<li>Monitor and troubleshoot kernel, driver, and hardware issues; perform root cause analysis in partnership with data and engineering teams and propose long-term solutions</li>
<li>Ensure system reliability through regular patching, monitoring, and performance tuning</li>
<li>Maintain accurate system documentation, runbooks, and configuration baselines</li>
<li>Collaborate with software, hardware, and scientific teams to ensure platform reliability and scalability</li>
</ul>
<p>Qualifications, Skills, Knowledge &amp; Abilities:</p>
<ul>
<li>BS in Computer Science, Engineering, or related field</li>
<li>5+ years of experience designing and building high-performance physical Linux systems in high-throughput or mission-critical environments</li>
<li>Deep knowledge of Linux kernel, NFS and Linux file system performance tuning</li>
<li>Solid background in TCP/IP networking, routing, VLANs, and firewall rules</li>
<li>Experience with the latest CPU and GPU technologies</li>
<li>Proficiency in shell scripting (bash), working knowledge of Python, and familiarity with Ansible or similar configuration management tools</li>
<li>Proven hands-on experience building servers from components, diagnosing hardware failures, and working with vendors</li>
<li>Excellent documentation and communication skills</li>
<li>May occasionally be exposed to activity that requires pulling/lifting/moving/carrying up to 50 lbs</li>
<li>Experience with cloud computing infrastructure (e.g. AWS) and Docker desirable</li>
<li>Familiarity with security frameworks and compliance standards (e.g. ISO 27001) a plus</li>
</ul>
<p>At Ultima Genomics, your base pay is one part of your total compensation package. This role pays between $125,000 and $150,000, if performed in California, and your actual base pay will depend on your skills, qualifications, experience, and location.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$125,000 - $150,000</Salaryrange>
      <Skills>Linux, High-performance computing, Kernel and filesystem tuning, Ansible and Salt, TCP/IP networking, Routing, VLANs, Firewall rules, CPU and GPU technologies, Shell scripting, Python, Cloud computing infrastructure, Docker, Security frameworks and compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Life Sciences</Industry>
      <Employername>Ultima Genomics</Employername>
      <Employerlogo>https://logos.yubhub.co/ultimagen.com.png</Employerlogo>
      <Employerdescription>Ultima Genomics is a rapidly growing life sciences technology company developing ground-breaking genomics technologies.</Employerdescription>
      <Employerwebsite>https://www.ultimagen.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/ultimagenomics/jobs/5649426004</Applyto>
      <Location>Fremont, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f94dea6d-70a</externalid>
      <Title>Distributed Systems Engineer - Data Platform - Analytical Database Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About Role</p>
<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>
<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>
<p>As a Distributed Systems Engineer - Analytical Database Platform, you will:</p>
<ul>
<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>
<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>
<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>
<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Participate in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>
</ul>
<p>Key qualifications:</p>
<ul>
<li>3+ years of experience in software development covering distributed systems and databases.</li>
<li>Strong programming skills (Golang, Python, and C++ are preferred), as well as a deep understanding of software development best practices and principles.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimization, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>
<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with SALT or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, at no cost. These are the same tools already used by Cloudflare&#39;s enterprise customers.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here&#39;s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/4886734</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6556c9a6-357</externalid>
      <Title>Senior Professional Services, Technical Architect - AI</Title>
      <Description><![CDATA[<p>As a Senior Professional Services Technical Architect, AI at GitLab, you&#39;ll be an embedded expert who helps customers move from ideas to production. You&#39;ll work directly with customer teams as a consultative partner, running in-depth discovery to understand their environment and priorities, then designing and delivering solutions that connect business goals to architecture and implementation.</p>
<p>This is a deeply technical, customer-facing role where you&#39;ll build and deploy Custom Agents, Custom Flows, and CI/CD integrations. You&#39;ll own delivery end-to-end, from prototype through production support. You&#39;ll partner closely with Professional Services and Customer Success stakeholders, including Professional Services Engineers, Project Managers, Customer Success Managers, and Solution Architects.</p>
<p>Some examples of our projects include leading customer discovery and defining a prioritized GitLab Duo Agent Platform use case roadmap tied to clear success criteria, designing and delivering production-ready GitLab Duo Agent Platform implementations, building rapid prototypes to demonstrate the art of the possible with agentic AI, and integrating the GitLab Duo Agent Platform with customer systems and workflows using GitLab APIs, pipeline configuration, and infrastructure as code.</p>
<p>What you&#39;ll do:</p>
<p>Conduct deep customer discovery to understand business goals, technical constraints, and organizational dynamics, and translate them into clear problem statements and a prioritized use case plan for GitLab Duo Agent Platform.</p>
<p>Partner with customer stakeholders across engineering, security, compliance, and business teams to align on success criteria, milestones, and adoption strategy for AI workflows in production.</p>
<p>Design, build, and deploy production-ready GitLab Duo Agent Platform solutions, including Custom Agents, Custom Flows, and CI/CD integrations that map to validated customer use cases.</p>
<p>Embed with customer engineering teams to deliver hands-on implementations end-to-end, from prototype to production rollout, troubleshooting, and optimization.</p>
<p>Configure and integrate platform foundations such as runners, network access, runtime sandboxing, GitLab APIs (REST and GraphQL), and AI governance controls (for example, role-based access control and model policies) to meet enterprise requirements.</p>
<p>Measure and communicate impact using DORA (DevOps Research and Assessment) metrics, AI Impact Analytics, and Value Stream Analytics, and use those insights to guide iteration and expansion of successful use cases.</p>
<p>Codify repeatable deployment patterns, reusable assets, and lessons learned, contributing back to GitLab through documentation, accelerators, and product feedback informed by field experience.</p>
<p>Travel up to 50% for customer site engagements and company onsite events to support delivery, onboarding, and stakeholder alignment.</p>
<p>What you&#39;ll bring:</p>
<p>Demonstrated experience leading customer-facing technical engagements, from discovery through production rollout, with ownership of outcomes.</p>
<p>Proficiency in Python, with experience building and operating production-grade applications and integrations.</p>
<p>Experience delivering with GitLab CI/CD, including pipeline design, YAML configuration, and using GitLab APIs (REST and GraphQL).</p>
<p>Hands-on experience with infrastructure as code (for example, Terraform or Ansible) and deploying solutions into enterprise environments.</p>
<p>Working knowledge of large language model (LLM) capabilities and limitations, including prompt engineering and building agentic workflows (such as Custom Agents and Custom Flows).</p>
<p>Experience with Docker, container orchestration concepts, and runner configuration in secure environments.</p>
<p>Familiarity with DevSecOps practices, including security controls, access management, and compliance requirements that impact deployment design.</p>
<p>Strong written and verbal communication skills, with the ability to partner closely with customer stakeholders and translate business goals into technical plans in a remote, asynchronous environment.</p>
<p>About the team:</p>
<p>GitLab&#39;s Professional Services organization within Customer Success helps customers get value from the GitLab Duo Agent Platform. We&#39;re a remote, asynchronous team that works closely with customer-facing colleagues to support successful deployments. We focus on turning what we learn in the field into reusable assets, clearer documentation, and product feedback that helps improve GitLab Duo Agent Platform for future customers.</p>
<p>The base salary range for this role&#39;s listed level is currently for residents of the United States only. This range is intended to reflect the role&#39;s base salary rate in locations throughout the US. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, and abilities of the applicant, equity with other team members, alignment with market data, and geographic location. The base salary range does not include any bonuses, equity, or benefits. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.</p>
<p>United States Salary Range: $164,880-$247,320 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,880-$247,320 USD</Salaryrange>
      <Skills>Python, GitLab CI/CD, Infrastructure as Code, Docker, Container Orchestration, DevSecOps, Large Language Model (LLM), Prompt Engineering, Agentic Workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8334735002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba0a936c-9b5</externalid>
      <Title>Partner Solution Architect (pre-sales)</Title>
      <Description><![CDATA[<p>We are looking for a Partner Solutions Architect to lead technical strategy and enablement for our ecosystem in the ANZ region. This is a hands-on builder role. You will be responsible for ensuring our partners are not only articulating Elastic&#39;s value but are technically capable of architecting, building, and validating complex solutions.</p>
<p>As a Partner Solutions Architect, you will:</p>
<ul>
<li>Own Technical Engagement Plans (TEPs) for focus partners, establishing long-term technical roadmaps at the CTO and Practice Lead level.</li>
<li>Guide partners through high-stakes Technical Validation cycles, ensuring Elastic solutions are built to best-practice standards.</li>
<li>Lead &#39;one-to-many&#39; technical &#39;Build-a-thons&#39; and hands-on laboratory sessions that empower partner engineers to lead their own implementations.</li>
<li>Build deep relationships with partner pre-sales teams to guide them through the &#39;how-to&#39; of complex Search AI, Observability, and Security architectures at the configuration level.</li>
<li>Collaborate on &#39;design wins&#39; by developing repeatable technical blueprints.</li>
</ul>
<p>To be successful in this role, you will require:</p>
<ul>
<li>Direct, hands-on experience with the Elastic Stack (ELK) or similar distributed search/analytics technologies (e.g., OpenSearch, Solr, Splunk, Datadog).</li>
<li>8+ years of experience in technical roles.</li>
<li>Proven ability to design and build technical prototypes, ingest complex datasets, and optimize search/indexing performance.</li>
<li>Hands-on experience with Kubernetes, Docker, and Infrastructure as Code (Terraform) on AWS, Azure, or GCP.</li>
<li>3+ years in a partner-facing role, with a focus on building technical practices and enabling third-party engineering teams.</li>
<li>The ability to translate deep technical capabilities into scalable partner-led solutions.</li>
</ul>
<p>If you are a motivated and experienced professional with a passion for technology and partnership development, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Elastic Stack (ELK), OpenSearch, Solr, Splunk, Datadog, Kubernetes, Docker, Infrastructure as Code (Terraform), AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a Search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Their platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7757097</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ca21d379-481</externalid>
      <Title>AI Solutions Engineer, Post Sales- W&amp;B</Title>
      <Description><![CDATA[<p>The Field Engineering team at Weights &amp; Biases plays a vital role in ensuring customer success and adoption of our platform. As part of this team, we partner with Sales, Support, Product, and Engineering to lead technical success after the sales process.</p>
<p>We work closely with some of the most advanced AI teams in the world, helping them build, optimize, and scale their ML and GenAI workflows across industries such as computer vision, robotics, natural language processing, and large language models (LLMs).</p>
<p>We’re hiring an AI Solutions Engineer, Post-Sales to help customers solve real-world problems by enabling them to implement and scale ML pipelines and agentic workflows using Weights &amp; Biases. In this role, you’ll collaborate with engineering teams to ensure smooth onboarding and adoption, act as a trusted advisor on best practices, and represent the voice of the customer internally.</p>
<p>You will partner directly with leading AI teams to optimize workflows, share technical expertise, and influence our product roadmap based on real-world customer feedback.</p>
<p>This is an ideal opportunity for ML practitioners who are customer-focused and eager to work with top AI companies globally.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with engineering teams to ensure smooth onboarding and adoption of Weights &amp; Biases</li>
<li>Act as a trusted advisor on best practices for implementing and scaling ML pipelines and agentic workflows</li>
<li>Represent the voice of the customer internally and influence our product roadmap based on real-world customer feedback</li>
<li>Partner directly with leading AI teams to optimize workflows and share technical expertise</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3–5 years of relevant experience in a similar role</li>
<li>Strong programming proficiency in Python</li>
<li>Hands-on experience enabling production-grade ML systems, with a focus on training and inference pipelines, experiment tracking, deployment patterns, and observability using deep learning frameworks (TensorFlow/Keras, PyTorch/PyTorch Lightning) and MLOps tooling (e.g. Airflow, Kubeflow, Ray, TensorRT)</li>
<li>Familiarity with cloud platforms (AWS, GCP, Azure)</li>
<li>Experience with GenAI/LLMs and related tools (e.g. LangChain/LangGraph, HuggingFace Transformers, Pinecone, Weaviate)</li>
<li>Strong experience with Linux/Unix</li>
<li>Excellent communication and presentation skills, both written and verbal</li>
<li>Ability to break down and solve complex problems through customer consultation and execution</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Background in robotics</li>
<li>TypeScript experience</li>
<li>Proficiency with Fastai, scikit-learn, XGBoost, or LightGBM</li>
<li>Background in data engineering, MLOps, or LLMOps, with tools such as Docker and Kubernetes</li>
<li>Familiarity with data pipeline tools</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Python, ML systems, deep learning frameworks, MLOps tooling, cloud platforms, GenAI/LLMs, Linux/Unix, communication and presentation skills, robotics, TypeScript, Fastai, scikit-learn, XGBoost, LightGBM, data engineering, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. It became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4651106006</Applyto>
      <Location>Livingston, NJ / New York, NY / Philadelphia, PA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44ca68dc-996</externalid>
      <Title>Senior Software Engineer - Fullstack</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer - Fullstack to join our team. As a Full Stack software engineer, you will work with your team and product management to make insights from data simple. We are looking for engineers that are customer obsessed, who can take on the full scope of the product and user experience beyond the technical implementation. You&#39;ll set the foundation for how we build robust, scalable and delightful products.</p>
<p>You&#39;ll create experiences that help customers achieve the full project lifecycle, from loading data and visualizing results to creating statistical models and deploying production artifacts. Examples include:</p>
<ul>
<li>Simple workflows to create, configure, and manage large-scale compute clusters, networks, and data sources.</li>
<li>Creating, deploying, testing, and upgrading complex data pipelines with powerful features to visualize data graphs.</li>
<li>Seamless onboarding and management for all members of an organisation to become data-driven.</li>
<li>A great SQL-centric data exploration and dashboarding experience on Databricks.</li>
<li>An interactive environment for collaborative data projects at massive scale with an easy path to production.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with HTML, CSS, and JavaScript.</li>
<li>Passion for user experience and design, and a deep understanding of front-end architecture.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Motivated by delivering customer value.</li>
<li>Experience with modern JavaScript frameworks (e.g., React, Angular, Vue.js, or Ember).</li>
<li>5+ years of experience with server-side web technologies (e.g., Node.js, Java, Python, Scala, C#, C++, or Go).</li>
<li>Good knowledge of SQL.</li>
<li>Experience with cloud technologies, e.g., AWS, Azure, GCP, Docker, or Kubernetes.</li>
<li>Experience developing large-scale distributed systems.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$157,700-$213,800 USD</Salaryrange>
      <Skills>HTML, CSS, JavaScript, Node.js, Java, Python, Scala, C#, C++, Go, SQL, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best Data Intelligence Platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544403002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c916726e-d71</externalid>
      <Title>Principal Software Engineer (Networking) - Platform</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer (Networking) - Platform, you will lead technical initiatives for automating network engineering efforts to guarantee the reliability of the global Elastic infrastructure. You will grow our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, codebases, tooling and automations.</p>
<p>You will collaborate in an inclusive environment focused on operational excellence that uplifts others, helping prevent repeated customer impact through major-incident response and prioritized problem management. Our on-call rotation is well distributed, and we address complex customer concerns as well.</p>
<p>You will participate in coding, innovating technical designs, crafting solutions, improving resilience, and prioritizing security, bug fixes, and features. For example, debugging Azure Networking for Elastic Cloud Serverless is part of our efforts, and we want your experience to contribute to a truly exceptional customer experience!</p>
<p>We want to hear about your successes and lessons learned from striving for &#39;progress, not perfection&#39; in the name of Platform reliability, and about your customer-first approach to solving operational problems for both today and the future.</p>
<p>We value a passion for developing solutions through inclusive communication that grows and strengthens partner and team relationships. Experience working in distributed teams or working remotely is desirable.</p>
<p>You have designed and built a SaaS product in a public cloud, ideally using Infrastructure-as-Code tooling such as Crossplane or Terraform.</p>
<p>You have built Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</p>
<p>You have written product features or functions in Golang or other programming languages.</p>
<p>You have worked with containerized services (such as Docker).</p>
<p>You have proven results in leading and improving cross-team engineering initiatives.</p>
<p>You have experience in system administration with professional skills in Linux on distributed systems at scale.</p>
<p>You have diagnosed or designed, implemented and created solutions with the Elastic Stack.</p>
<p>You are experienced in self-organizing and sharing within a globally distributed team environment.</p>
<p>You bring out the best in team members by uplifting others through coaching and mentoring.</p>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $189,800-$232,900 USD. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$189,800-$232,900 USD</Salaryrange>
      <Skills>Software Engineering, Cloud Network Solutions, Public Cloud, Go, Managed Kubernetes Services, Linux, Distributed Systems, Elastic Stack, Infrastructure-as-Code, Crossplane, Terraform, Kubernetes, Containerized Services, Docker, System Administration, Golang, Programming Languages, SaaS Product Development, Kubernetes-at-Scale Infrastructure, Automation, Self-Organizing Team Environment, Coaching and Mentoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7565185</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>987aab7f-f67</externalid>
      <Title>Principal Solutions Architect</Title>
      <Description><![CDATA[<p>As a Principal Solutions Architect in GitLab&#39;s global Solutions Architecture Center of Excellence, you&#39;ll be the trusted technical advisor and pre-sales partner who helps customers unlock the full value of GitLab&#39;s AI-powered DevSecOps platform.</p>
<p>You will solve complex challenges across the software lifecycle by connecting GitLab, AI agents, security, and cloud-native capabilities to real business outcomes, guiding customers through digital transformation and modern software delivery.</p>
<p>Reporting into the Senior Director and acting as the AI subject matter expert on a team of specialists, you&#39;ll own technical strategy for strategic accounts, lead value stream and Proof of Value (PoV) engagements, and serve as the technical &#39;CTO&#39; for your accounts.</p>
<p>In your first year, you&#39;ll be focused on driving successful platform evaluations and adoption as part of the pre-sales process, shaping AI-led solution architectures, influencing product direction with field feedback, and creating reusable assets and providing thought leadership for raising GitLab&#39;s technical bar globally.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead technical discovery, architecture design, demos, and end-to-end evaluations (POC/POV) to validate GitLab as the preferred agentic, AI-powered DevSecOps platform for prospects and customers.</li>
<li>Drive AI-focused solution strategy as the team&#39;s AI subject matter expert, including competitive positioning and business value justifications.</li>
<li>Own the technical strategy and influence Customer Success Plans for assigned accounts, acting as the &#39;technical CTO&#39; to guide multi-team, multi-year transformation initiatives across the DevSecOps lifecycle.</li>
<li>Collaborate with Sales, Customer Success, Product Management, Engineering, and Marketing to shape account strategies, inform territory planning, and ensure successful platform adoption.</li>
<li>Provide advanced technical guidance during the pre-sales cycle, including tender and audit support, workshop design, and solving complex integration and implementation challenges.</li>
<li>Serve as the voice of the customer by translating real-world feedback into product requirements, documentation improvements, and roadmap input, especially for AI, security, and platform capabilities.</li>
<li>Create and share reusable technical assets such as reference architectures, working examples, best practice guides, and internal enablement content to scale impact across regions.</li>
<li>Mentor other Solutions Architects, contribute to global initiatives for the Center of Excellence, and act as an external industry authority through thought leadership, standards participation, and ecosystem relationships.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Expert-level command of the most strategic aspects of GitLab&#39;s product and customer personas, while empowering the field with domain knowledge.</li>
<li>Deep hands-on expertise with AI, such as designing or implementing AI-powered solutions, advising on AI adoption, or acting as an AI subject matter expert for customers or internal teams.</li>
<li>Experience in technical pre-sales, software consulting, or similar roles where you connect complex technology to business outcomes.</li>
<li>Practical background in modern software development or operations, including CI/CD, DevSecOps practices, and related tooling.</li>
<li>Knowledge of cloud computing concepts and architectures, and how cloud services integrate into secure, scalable application delivery.</li>
<li>Ability to design and explain technical architectures that span multiple teams and phases of the software lifecycle, from planning through monitoring.</li>
<li>Skill in leading technical evaluations and workshops (for example, proofs of value or solution design sessions) with diverse stakeholders, from engineers to executives.</li>
<li>Strong communication, relationship-building, and stakeholder management skills, with the ability to act as a trusted advisor and customer advocate across sales, product, and engineering teams.</li>
<li>Openness to learning and growth, with experience building new skills over time; candidates with transferable experience in adjacent domains (for example, security, data, or cloud architecture) are encouraged to apply.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>This role sits within GitLab&#39;s global Solutions Architecture Center of Excellence, our distributed team of subject matter experts focused on AI, application security, and monetization.</p>
<p>Our mission is to accelerate GitLab&#39;s market leadership by helping shape how customers adopt GitLab and partnering with Sales, Product, and Engineering to drive successful platform outcomes.</p>
<p>We collaborate asynchronously across regions, sharing best practices, reusable assets, and field insights that influence product direction and go-to-market motions.</p>
<p>As an AI-focused Solutions Architect on our team, you&#39;ll help tackle complex customer challenges around AI adoption, security, and value realization, while contributing to the technical standards, frameworks, and thought leadership that support GitLab&#39;s most strategic accounts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$138,600-$297,000 USD</Salaryrange>
      <Skills>AI, DevSecOps, Cloud Native, CI/CD, DevOps, Cloud Computing, Technical Architecture, Solution Design, Pre-Sales, Software Consulting, Machine Learning, Data Science, Security, Cloud Security, Containerization, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, with over 50 million registered users and more than 50% of the Fortune 100 trusting their platform.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8341795002</Applyto>
      <Location>Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a8092b6e-7f5</externalid>
      <Title>Bare Metal Support Engineer</Title>
      <Description><![CDATA[<p>As a Bare Metal Support Engineer at CoreWeave, you will be responsible for supporting, operating, and maintaining CoreWeave&#39;s extensive GPU fleet across our growing data centers in the U.S., Europe, and beyond.</p>
<p>You will work closely with customers, data center technicians, and engineering teams to ensure the reliability, performance, and scalability of our infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Providing high-level support for customers utilizing bare-metal GPU fleets on CoreWeave Cloud.</li>
<li>Diagnosing, triaging, and investigating reported customer issues and high-priority incidents, identifying root causes and escalating when necessary.</li>
<li>Developing a deep understanding of customer workloads and use cases to provide tailored technical support.</li>
<li>Coordinating remote troubleshooting and hardware interventions with Data Center Technicians.</li>
<li>Creating and maintaining internal documentation, including troubleshooting guides, best practices, and knowledge base articles.</li>
<li>Participating in an on-call rotation to support production clusters and ensure operational reliability.</li>
<li>Collaborating with engineering teams to improve hardware reliability, software stability, and system performance.</li>
<li>Implementing automation and scripting to streamline support workflows and reduce manual interventions.</li>
<li>Performing in-depth log analysis and debugging across multiple layers of the stack (firmware, drivers, hardware).</li>
<li>Providing feedback to internal teams on common support issues to drive continuous improvements.</li>
<li>Working with networking teams to troubleshoot connectivity issues affecting customer workloads.</li>
<li>Supporting supercomputing infrastructure running GPU workloads at scale.</li>
<li>Driving operational excellence by refining internal processes and support methodologies.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Experience in data centers, GPU clusters, server deployments, system administration, or hardware troubleshooting.</li>
<li>Demonstrated experience driving resolutions and continuous improvements across cross-functional environments and teams within a data center environment.</li>
<li>Intermediate knowledge of Linux (Ubuntu, CentOS, or similar), including command-line proficiency.</li>
<li>Experience with NVIDIA GPUs, SuperMicro systems, Dell systems, high-performance computing (HPC), and large-scale data center environments.</li>
<li>Experience in networking fundamentals (TCP/IP, VLANs, DNS, DHCP) and troubleshooting tools.</li>
<li>Hands-on experience with firmware updates, BIOS configurations, and driver management.</li>
<li>Experience analyzing system logs and debugging issues across firmware, drivers, and hardware layers.</li>
<li>Experience working with Jira, Confluence, Notion, or other issue-tracking and documentation platforms.</li>
<li>Experience in scripting and automation (Python, Bash, Ansible, or similar).</li>
</ul>
<p>If you&#39;re a curious and analytical individual with a passion for problem-solving and a desire to work in a fast-paced environment, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$83,000 to $132,000</Salaryrange>
      <Skills>Linux, GPU clusters, server deployments, system administration, hardware troubleshooting, NVIDIA GPUs, SuperMicro systems, Dell systems, high-performance computing, large-scale data center environments, networking fundamentals, troubleshooting tools, firmware updates, BIOS configurations, driver management, system logs, debugging issues, Jira, Confluence, Notion, issue-tracking, documentation platforms, scripting, automation, Kubernetes, Docker, containerized infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that delivers a platform of technology, tools, and teams to enable innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4560350006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c571d7f7-d82</externalid>
      <Title>Engineering Manager - Storage</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. As an Engineering Manager, you will work with your team to build mission-critical Lakebase services on the Databricks Platform at scale.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Drive continuous delivery within a team of experts in storage technology, distributed systems and Rust.</li>
<li>Manage the development and rollout of storage services that host millions of customer databases across dozens of regions.</li>
<li>Partner with peer engineering teams across Databricks to co-evolve Lakebase services with our global infrastructure.</li>
<li>Lead operational excellence in 24/7 operation of our system.</li>
</ul>
<p>The impact you will have:</p>
<ul>
<li>Hire great engineers to build an outstanding team.</li>
<li>Support engineers in their career development by providing clear feedback and developing engineering leaders.</li>
<li>Ensure high technical standards by instituting processes (architecture reviews, testing) and culture (engineering excellence).</li>
<li>Work with engineering and product leadership to build a long-term roadmap.</li>
<li>Coordinate execution and collaborate across teams to unblock cross-cutting projects.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Experience with building and shipping storage systems where correctness and performance are essential</li>
<li>BS (or higher) in Computer Science, or a related field</li>
<li>2+ years of experience building and leading a team of engineers working in a related system</li>
<li>Experience with build, release and deployment infrastructure technologies such as Spinnaker, Jenkins, Airflow, Docker, Kubernetes, Terraform, Bazel, etc.</li>
<li>Ability to attract, hire, and coach engineers who meet the Databricks hiring standards</li>
<li>Comfort working on cross-functional projects with the ability to deeply understand product and customer personas</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>storage technology, distributed systems, Rust, Spinnaker, Jenkins, Airflow, Docker, Kubernetes, Terraform, Bazel</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8476581002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1fba9a94-c7e</externalid>
      <Title>Staff Software Engineer (Core Resilience)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Core Infrastructure team. As a Staff Software Engineer, you will work with engineering teams to design, develop, and deliver cloud-based infrastructure projects on a modern tech stack. You will drive evaluation, development, and rollout of new common microservices, operate, support, and upgrade shared services and frameworks, and collaborate with architects, QA, product owners, security, and operations engineers.</p>
<p>This is an opportunity to do career-defining work with a talented team of engineers and technically-minded managers. You will be joining a team that is fast, creative, and flexible, with a weekly release cycle and individual ownership. We expect great things from our engineers and reward them with stimulating new projects, new technologies, and the chance to have significant equity in a company that is about to change the cloud computing landscape forever.</p>
<p>Minimum required knowledge, skills, and abilities:</p>
<ul>
<li>Immense passion for doing the right thing to help Okta&#39;s technology stay ahead of its anticipated business growth</li>
<li>Solid technology chops in architecting, implementing, tuning, and debugging some of the largest cloud deployments in the enterprise world</li>
<li>Bachelor&#39;s degree in computer science or equivalent</li>
<li>7+ years of extensive programming experience in an object-oriented programming language such as Java, especially in backend services</li>
<li>7+ years of experience working with MySQL or equivalent relational database systems</li>
<li>Demonstrated experience working with REST and a good understanding of its fundamentals</li>
<li>Knowledge of Spring, Spring Boot, Hibernate, and Tomcat</li>
<li>Knowledge of AWS, Redis, Elasticsearch, and Docker</li>
<li>Familiarity with network security, authentication, and authorization as a nice-to-have</li>
<li>A demonstrated record of following software engineering best practices</li>
<li>Experience with enterprise SaaS as a nice-to-have</li>
<li>Familiarity with Agile software development process</li>
</ul>
<p>The Okta Experience:</p>
<ul>
<li>Supporting Your Well-being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, MySQL, REST, Spring, Spring Boot, Hibernate, Tomcat, AWS, Redis, Elasticsearch, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity management solutions for businesses, with over 15,800 customers worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7124884</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1a7635f5-a02</externalid>
      <Title>Principal Software Engineer (Networking) - Platform</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer (Networking) - Platform, you will be part of the Platform Engineering department, responsible for crafting, building, and improving the multi-cloud platform at scale for Elastic Cloud Hosted and Serverless. You will participate in coding, innovating technical designs, crafting solutions, improving resilience, and prioritizing security, bug fixes, and features.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Taking an engineering approach in leading technical initiatives for automating network engineering efforts to guarantee the reliability of the global Elastic infrastructure.</li>
<li>Growing our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, codebases, tooling, and automations.</li>
<li>Collaborating with an inclusive approach and a focus on operational excellence that uplifts others.</li>
<li>Preventing repeated customer impact in response to major incidents and prioritized problem management.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years in Software Engineering with product success in delivering Cloud network solutions.</li>
<li>Experience in public cloud, Go, and managed Kubernetes services is advantageous.</li>
<li>Successes and lessons learned from striving for &#39;progress not perfection&#39; in the name of Platform reliability.</li>
<li>Passion for developing solutions that involve inclusive communication methods to grow and strengthen partner and team relationships.</li>
</ul>
<p>Bonus points include:</p>
<ul>
<li>Designing and building a SaaS product in a public cloud ideally built using Infrastructure-as-Code tooling such as Crossplane or Terraform.</li>
<li>Building Kubernetes-at-scale infrastructure, ideally across multiple cloud providers, and the vital automation to support it.</li>
<li>Writing product features or functions in Golang or other programming languages.</li>
<li>Working with containerized services (such as Docker).</li>
<li>Proven results in leading and improving cross-team engineering initiatives.</li>
<li>Experience in system administration with professional skills in Linux on distributed systems at scale.</li>
<li>Diagnosing, designing, and implementing solutions with the Elastic Stack.</li>
<li>Experience working in a self-organizing, knowledge-sharing, globally distributed team environment.</li>
<li>Bringing out the best in team members through coaching and mentoring.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineering, Cloud Network Solutions, Public Cloud, Go, Managed Kubernetes Services, Infrastructure-as-Code, Crossplane, Terraform, Golang, Containerized Services, Docker, System Administration, Linux, Distributed Systems, Kubernetes, Automation, Inclusive Communication, Coaching and Mentoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a search AI company that enables everyone to find the answers they need in real time, using all their data, at scale. Its search AI platform is used by over 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7713597</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9a2bbb70-2c0</externalid>
      <Title>Senior Software Engineer - Data Platform</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our team in Bengaluru, India. As a Senior Software Engineer at Databricks, you will be responsible for designing, developing, and deploying large-scale distributed systems, including backend, DDS, and full-stack engineering. You will work closely with our product management team to bring great user experiences to our customers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop reliable and high-performance services and client libraries for storing and accessing large amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>
<li>Build scalable services using Scala, Kubernetes, and data pipelines, such as Apache Spark and Databricks.</li>
<li>Work on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Collaborate with our DDS team to develop and deploy data-centric solutions using Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>
<li>Develop and maintain high-quality code, following best practices and coding standards.</li>
<li>Participate in code reviews and provide feedback to improve the quality of the codebase.</li>
<li>Troubleshoot and resolve issues that arise during deployment and operation.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field.</li>
<li>7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or similar language.</li>
<li>Experience developing large-scale distributed systems from scratch.</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Strong understanding of software design patterns and principles.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Ability to work effectively in a team environment.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>
<li>Knowledge of cloud-based storage systems, such as AWS S3 and Azure Blob Store.</li>
<li>Familiarity with containerization using Docker and Kubernetes.</li>
<li>Experience with continuous integration and continuous deployment (CI/CD) pipelines.</li>
<li>Strong understanding of security principles and practices.</li>
<li>Familiarity with agile development methodologies and version control systems, such as Git.</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>Databricks is an equal opportunities employer and welcomes applications from diverse candidates. We are committed to creating an inclusive and respectful work environment where everyone feels valued and empowered to contribute their best work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, C++, Apache Spark, Data Plane Storage, Delta Lake, Delta Pipelines, Kubernetes, Docker, Git, Agile development methodologies, Version control systems, Cloud-based storage systems, Containerization, Continuous integration and continuous deployment (CI/CD) pipelines, Security principles and practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7601580002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ac7263ce-de7</externalid>
      <Title>Engineering Manager (Institutional - Custody, Prime Onchain Wallet)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you&#39;re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>Our work culture is intense and isn&#39;t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there&#39;s no better place to be.</p>
<p>While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment.</p>
<p>Attendance is expected and fully supported.</p>
<p>The Prime Onchain Wallet team is looking for a leader to step in and lead a tightly knit group of highly talented and motivated engineers – someone who&#39;s genuinely passionate about paving the way for institutional clients to operate confidently on-chain.</p>
<p>This person will set the vision, bring clarity and momentum to execution, and partner across product, engineering, compliance, and go-to-market to turn complex constraints into simple, scalable solutions that institutions can trust.</p>
<p>Onchain is the new Online: it will transform the way we exchange value and increase economic freedom by creating new opportunities. Wallet is the new browser and has the opportunity to become a super app. Prime Onchain Wallet is the interface for managing on-chain assets &amp; interacting with dapps.</p>
<p>Our team is building the operating system for businesses to operate on-chain. Businesses need enterprise tooling to operate on-chain and embrace this paradigm shift.</p>
<p>Web2 introduced a stack of web apps used by businesses: Salesforce, Slack, Gmail, Accounting software… to facilitate exchange of data &amp; information. They need a new stack of tools to operate on-chain.</p>
<p>We are building the only fully integrated solution that makes it simple &amp; secure to get started on-chain.</p>
<p>What you&#39;ll be doing:</p>
<ul>
<li>Lead the engineering teams responsible for building the mission-critical systems powering institutional products that shape the crypto landscape.</li>
<li>Collaborate with engineers, designers, product managers, and senior leadership to translate our vision into a tangible roadmap.</li>
<li>Break down complex projects into smaller pieces and lead the iterative design and implementation process.</li>
<li>Be a thoughtful technical voice within the team, aiding in diligent architectural decisions and fostering a culture of high quality and operational excellence.</li>
<li>Collaborate with Product and Engineering teams to ensure successful delivery and operation of complex, distributed systems at scale.</li>
<li>Coach your direct reports to have a positive impact on the organization and support their career growth.</li>
<li>Work closely with our talent organization to identify and recruit exceptional engineers who align with Coinbase&#39;s culture and contribute to our products.</li>
<li>Contribute to and take ownership of processes that drive engineering quality and meet our engineering SLAs.</li>
</ul>
<p>What we look for in you:</p>
<ul>
<li>At least 7 years of experience in software engineering.</li>
<li>At least 1 year of engineering management experience.</li>
<li>An ability to balance long-term strategic thinking with short-term planning.</li>
<li>Experience in creating, delivering, and operating multi-tenanted, distributed systems at scale.</li>
<li>An ability to be hands-on when needed, whether that&#39;s writing and reviewing code or technical documents, participating in on-call rotations and leading incidents, or triaging and troubleshooting bugs.</li>
<li>A passion for building an open financial system that brings the world together, which drives you to excel in this role.</li>
<li>The ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>You have gone through rapid growth at a company (from 10 to 100s of engineers).</li>
<li>You have experience with blockchains (such as Bitcoin or Ethereum).</li>
<li>You&#39;ve worked with Golang, Ruby, Docker, Sinatra, Rails, or Postgres.</li>
<li>You&#39;ve built financial, high-reliability, or security systems.</li>
<li>Crypto-forward experience, including familiarity with onchain activity such as interacting with Ethereum addresses, using ENS, and engaging with dApps or blockchain-based services.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>software engineering, engineering management, distributed systems, multi-tenanted systems, code review, technical documentation, on-call rotations, incident management, bug triage, generative AI tools, copilots, LibreChat, Gemini, Glean, Golang, Ruby, Docker, Sinatra, Rails, Postgres, blockchain development, financial systems, high reliability systems, security systems, crypto-forward experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase Institutional builds cutting-edge platforms that allow thousands of the world&apos;s largest financial institutions to trade, custody, and participate in the global cryptoeconomy.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7650637</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b0fc94e-4e4</externalid>
      <Title>Staff Engineer - Fullstack</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p><strong>The Product</strong></p>
<p>Okta’s Auth0 is an easy-to-implement authentication and authorization platform designed by developers for developers. We make access to applications safe, secure, and seamless for the more than 100 million daily logins around the world. Our modern approach to identity enables this Tier-0 global service to deliver convenience, privacy, and security so customers can focus on innovation.</p>
<p><strong>The Team</strong></p>
<p>The Enablement team is at the core of expanding Auth0&#39;s capabilities for B2B customers, enabling seamless and automated user lifecycle management at a massive scale. We build and own the critical features that enterprises rely on to connect their identity sources to Auth0, including Enterprise APIs and our powerful self-service capabilities.</p>
<p>Our work is highly impactful, helping customers automate the creation, updating, and deactivation of users. This is a cornerstone for B2B SaaS applications that need to efficiently manage access for their own customers and partners. We work with NodeJS, TypeScript, PostgreSQL, MongoDB, and React to build these highly available and scalable services.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Help drive the architectural vision and strategy on the team to design and deliver powerful new enterprise APIs and functionality for our customers.</li>
<li>Orchestrate and lead major technical projects across teams as necessary.</li>
<li>Design, architect, code, and document large-scale distributed systems.</li>
<li>Serve as a subject matter expert on building scalable, reliable, and maintainable distributed systems.</li>
<li>Mentor and coach less experienced engineers on sound engineering practices and technical leadership.</li>
<li>Collaborate with Product, Security, and other engineering teams to define and continually improve our platform and architecture.</li>
<li>Drive technical decision-making while striving to hit the right balance between factors such as simplicity, flexibility, reliability, and performance.</li>
<li>Participate in the team&#39;s on-call rotations to make sure we offer our customers the best availability for our services.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of experience working on large-scale systems or services.</li>
<li>Solid architectural and security knowledge, backed by experience in designing, implementing, and evolving complex distributed systems.</li>
<li>Experience working on projects that required close collaboration with external teams, and a track record of making those collaborations a success.</li>
<li>Solid previous experience with Node.js (JavaScript or TypeScript) to build scalable backend services and create and maintain public and internal APIs.</li>
<li>Experience building full-stack applications with an understanding of React.</li>
<li>Good understanding of SQL (PostgreSQL) and NoSQL (MongoDB) databases and how to optimise them for performance under load.</li>
<li>Experience with containerisation (Docker) and cloud environments such as AWS and Azure.</li>
<li>A good mentor and communicator who can explain complex concepts simply.</li>
</ul>
<p>#Hybrid</p>
<p>PID Number: P24578</p>
<p><strong>The Okta Experience</strong></p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, TypeScript, PostgreSQL, MongoDB, React, Docker, AWS, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta makes access to applications safe, secure, and seamless for over 100 million daily logins worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7593555</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eef55d3d-bf0</externalid>
      <Title>Cloud Deployment Engineer, Space</Title>
      <Description><![CDATA[<p>Job Title: Cloud Deployment Engineer, Space</p>
<p>Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century&#39;s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p><strong>ABOUT THE JOB</strong></p>
<p>SDANet and other programs are standing up Lattice stacks on AWS and Azure environments to integrate with mission partners. In this role, you will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. You will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>
<p>A significant part of your duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. You will be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>
<p>Additionally, you will deliver and maintain accreditation artifacts and standards for the environments and systems you are responsible for. You will stand up and maintain representative environments at the unclassified level for testing and development purposes, and provide direct in-person expertise during mission-critical periods.</p>
<p>Ensuring the deployed system meets security and compliance requirements through regular updates and host OS patching will also be part of your responsibilities. Your role is crucial to maintaining the integrity and performance of the deployed infrastructure.</p>
<p><strong>REQUIRED QUALIFICATIONS</strong></p>
<ul>
<li>5+ years of working experience in DevOps or SRE type roles</li>
<li>Strong proficiency with cloud services such as AWS, Azure, or Google Cloud Platform</li>
<li>Experience with IaC tools (Terraform, CloudFormation, Puppet, Ansible, etc.)</li>
<li>Strong experience with containerization technologies such as Docker and orchestration tools like Kubernetes and Helm</li>
<li>Deep understanding of networking concepts, TCP/IP protocols, and security best practices</li>
<li>Programming ability in one or more general-purpose scripting languages (Python, Go, Bash, Rust, etc.)</li>
<li>Strong problem-solving skills and the ability to work well under pressure</li>
<li>Excellent communication and collaboration skills to work effectively with cross-functional teams and develop internal roadmaps based on the needs of other teams</li>
<li>Experience deploying complex and scalable infrastructure solutions</li>
<li>Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, or Google Cloud Certified Professional</li>
<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
</ul>
<p><strong>PREFERRED QUALIFICATIONS</strong></p>
<ul>
<li>Extensive expertise in Kubernetes and Helm</li>
<li>Hold a DoD 8570 IAT Level 1 or 2 certification</li>
<li>Cisco Certified Network Associate (CCNA)</li>
<li>Experience with government Cyber certification processes</li>
<li>Experience installing, sustaining, and troubleshooting data systems for DoD or otherwise sensitive customers</li>
<li>Familiarity with DoD-managed network enclaves (NIPR, SIPR, etc.)</li>
<li>Military service background (particularly with Space experience)</li>
</ul>
<p>US Salary Range: $129,000-$171,000 USD</p>
<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. The actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers and are considered part of Anduril&#39;s total compensation package.</p>
<p>Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>
<ul>
<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>
<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>
<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>
<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>
<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>
<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting.</li>
<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>
<li>Additional work-life services, such as legal and financial support, are also available.</li>
<li>Professional Development: Annual reimbursement for professional development.</li>
<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>
<li>Relocation Assistance: Available depending on role eligibility.</li>
<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>
<li>UK &amp; IE Roles: Pension plan with employer match.</li>
<li>AUS Roles: Superannuation plan.</li>
</ul>
<p>The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.</p>
<p><strong>Protecting Yourself from Recruitment Scams</strong></p>
<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>cloud services, AWS, Azure, Google Cloud Platform, IaC, Terraform, Cloudformation, Puppet, Ansible, containerization, Docker, Kubernetes, Helm, networking, TCP/IP, security best practices, scripting languages, Python, Go, Bash, Rust, problem-solving, communication, collaboration, infrastructure solutions, relevant certifications, AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, Google Cloud Certified Professional, U.S. Secret security clearance, U.S. Top Secret security clearance, extensive expertise in Kubernetes and Helm, DoD 8570 IAT Level 1 or 2 certification, Cisco Certified Network Associate, government Cyber certification processes, installing, sustaining, troubleshooting, familiarity with DoD-managed network enclaves, military service background</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016027007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b9358113-c6a</externalid>
      <Title>Senior Solutions Engineer, Okta (Commercial Accounts)</Title>
<Description><![CDATA[<p>Secure Every Identity, from AI to Human. Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>The Solutions Engineer Team</strong></p>
<p>We believe Solutions Engineers at Okta are involved in all stages of the customer&#39;s digital transformation. Solutions Engineers are experienced in using presentations, email, phone, and social media to connect with customers virtually and in person. We are looking for great teammates who can build and deliver sales presentations and customised product demonstrations to educate Okta&#39;s Customers (from developers to product managers to C-level executives) on best practices throughout their cloud security technology journey.</p>
<p>We believe Okta&#39;s Solutions Engineers empathise with Customers and quickly discern their true technical needs by asking detailed, clarifying questions while presenting solutions that specifically address those needs. Okta Solutions Engineers have a rare combination of technical acumen and business insight, and this is a career where you can utilise both.</p>
<p>As a Solutions Engineer at Okta, you will further develop each of these skills by advising a diverse set of customers on the value they will gain by using Okta&#39;s Identity Platform.</p>
<p><strong>The Solutions Engineer Opportunity</strong></p>
<p>Reporting to the Senior Manager, Solutions Engineers, this Senior Solutions Engineer (Okta) will partner with Commercial Account Executives. As a Senior Solutions Engineer at Okta, you will be the technical lead in all stages of the customer lifecycle. You will focus on delivering customer value by aligning customer requirements to business results enabled by Okta solutions. You will contribute to account and opportunity strategies and, collaboratively, demonstrate the value of Okta solutions through presentations, whiteboard sessions, and proof-of-concept demonstrations.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>As a Senior Solutions Engineer, you&#39;ll be a strategic technical expert for a customer-facing sales team. You&#39;ll use your skills to:</p>
<ul>
<li>Work alongside the Corporate/Expansion (employee size 300-1249) sales team as its technical and domain expert, helping Customers understand the value of Okta&#39;s solutions.</li>
<li>Serve as a technical advisor: Partner with the sales team to educate customers on Okta&#39;s identity solutions and demonstrate their value.</li>
<li>Solve customer challenges: Understand customer needs and provide tailored product demonstrations to show how our solutions can solve their business problems.</li>
<li>Lead technical engagements: Answer product and technical questions, and plan and deliver complex Proofs of Concept (POCs) by collaborating with other Okta engineering teams.</li>
<li>Drive product and knowledge growth: Share customer feedback with our product teams to influence future enhancements and contribute to the team&#39;s knowledge by sharing best practices and reusable assets.</li>
<li>Stay ahead of the curve: Keep up with competitive analysis and market differentiation to better position Okta.</li>
<li>Support company events: Represent Okta at marketing events, including conferences, user groups, and trade shows.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Experience: 8+ years of experience in pre-sales engineering and solution selling, with a strong background in Enterprise account segments.</li>
<li>Technical Acumen:
<ul>
<li>Deep understanding of identity protocols (e.g., SAML, OIDC, OAuth, FIDO, Passkeys, SCIM, LDAP).</li>
<li>Working knowledge of cloud platforms like AWS, Azure/Entra, and GCP.</li>
<li>Experience with REST APIs and SDKs.</li>
<li>Hands-on experience in one or more of the following: front-end web development, back-end development, scripting (Bash, PowerShell), or DevOps (Docker, Kubernetes).</li>
<li>Understanding of identity-related cybersecurity topics (e.g., phishing, MFA bypass attacks, privilege escalation).</li>
</ul>
</li>
<li>Communication &amp; Presentation Skills: The ability to simplify complex technical concepts and deliver compelling presentations to diverse audiences, from developers to C-level executives. You should be skilled at diagramming user journeys and complex architecture.</li>
<li>Strategic Mindset: Proven skills in territory management, including building pipelines and collaborating with sales counterparts.</li>
<li>Travel: Ability to travel up to 25% of the time.</li>
<li>A bachelor&#39;s degree in Engineering, Computer Science, MIS, or a comparable field is preferred.</li>
</ul>
<p><strong>Bonus Skills (Ideally, You Have)</strong></p>
<ul>
<li>Experience in the IAM space.</li>
<li>Hands-on knowledge of Identity Governance (IGA) or Privileged Access Management (PAM) solutions.</li>
<li>Practical experience with Windows Server, Active Directory, LDAP, and Federation services.</li>
</ul>
<p><strong>You might also have (not mandatory):</strong></p>
<ul>
<li>AI &amp; Emerging Tech: Expertise with Agentic AI and LLM-based workflows. You understand the use cases for AI agents, both internal and external, and have familiarity with MCP servers.</li>
</ul>
<p>#LI-Remote #LI-CM</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,000-$275,000 USD</Salaryrange>
      <Skills>pre-sales engineering, solution selling, Enterprise account segments, identity protocols, SAML, OIDC, OAuth, FIDO, Passkeys, SCIM, LDAP, cloud platforms, AWS, Azure/Entra, GCP, REST APIs, SDKs, front-end web development, back-end development, scripting, DevOps, Docker, Kubernetes, identity-related cybersecurity topics, phishing, MFA bypass attacks, privilege escalation, communication, presentation, diagramming user journeys, complex architecture, IAM space, Identity Governance, Privileged Access Management, Windows Server, Active Directory, Federation services, Agentic AI, LLM-based workflows, AI agents, MCP servers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions. It is a large-scale company.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7738770</Applyto>
      <Location>Georgia; New York, New York; North Carolina; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), Mysql, Federal</Title>
<Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will have ownership of all technical aspects of our data services tier from the ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team, working on scaling, securing, and tuning our infrastructure, be it self-managed MySQL, RDS Aurora MySQL/PostgreSQL, or CloudSQL MySQL/PostgreSQL. Our team is committed to two Okta Engineering mantras: &quot;Always On&quot; and &quot;No Mysteries&quot;.</p>
<p>You will ensure effective performance and 24x7 availability of the production database tier; design, implement, and document operational processes, tasks, and configuration management; and coordinate efforts towards performance tuning, scaling, and benchmarking the data services infrastructure. You will contribute to configuration management using Chef and infrastructure as code using Terraform, conduct thorough performance analysis and tuning to meet application SLAs, optimize database schemas, indexes, and SQL queries, and quickly troubleshoot and resolve database performance issues.</p>
<p><strong>Required Skills:</strong></p>
<ul>
<li>Proven experience as a MySQL DBRE</li>
<li>In-depth knowledge of MySQL internals, performance tuning, and query optimization</li>
<li>Experience in database design, implementation, and maintenance in a high-availability environment</li>
<li>Strong proficiency in SQL and familiarity with scripting</li>
<li>Familiarity with database monitoring tools (e.g., Grafana)</li>
<li>Solid understanding of database security practices and compliance requirements</li>
<li>Ability to troubleshoot and resolve database performance issues and outages promptly</li>
<li>Excellent communication skills and ability to work effectively in a team environment</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field (or equivalent work experience)</li>
</ul>
<p><strong>Preferred Skills:</strong></p>
<ul>
<li>AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management</li>
<li>Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management</li>
<li>Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability</li>
<li>Proficiency in a Linux environment, including Linux internals and tuning</li>
<li>Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire. This role requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
      <Skills>Proven experience as a MySQL DBRE, In-depth knowledge of MySQL internals, performance tuning, and query optimization, Experience in database design, implementation, and maintenance in a high-availability environment, Strong proficiency in SQL and familiarity with scripting, Familiarity with database monitoring tools (e.g, Grafana), Solid understanding of database security practices and compliance requirements, Ability to troubleshoot and resolve database performance issues and outages promptly, Excellent communication skills and ability to work effectively in a team environment, Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience), AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management, Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management, Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability, Proficient in a Linux environment, including Linux internals and tuning, Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0c258f-1f6</externalid>
      <Title>Engineering Manager II, Enterprise AI Solutions</Title>
      <Description><![CDATA[<p>We are seeking a business-savvy Engineering Manager to help define Corporate IT&#39;s AI-based future at Pinterest. Working closely with cross-functional engineering teams and business leaders, you will lead a nimble team playing a pivotal role in scaling Corporate IT&#39;s engineering department.</p>
<p>As an Engineering Manager, you will guide your team in designing and building the solutions that make our business partners&#39; jobs easier, faster, and more capable. You will grow and empower engineers while shaping how we build Pinterest&#39;s AI future.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead a team of employees and contractors focused on solving business problems using AI tools.</li>
<li>Work closely with the existing software engineering teams to develop a seamless and low-friction client experience.</li>
<li>Mentor junior engineers to help them grow and develop into the best that they can be.</li>
<li>Motivate and lead your team to show up every day and do their best work.</li>
<li>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of experience leading and growing engineering teams, with a strong hands-on background in Python.</li>
<li>7+ years of industry experience designing, building, and operating scalable, highly available backend systems, including owning production-grade infrastructure at scale.</li>
<li>Proficiency in designing and delivering AI-based solutions that solve real-world business problems.</li>
<li>Understanding of business unit challenges and problems, focused on Finance, Accounting, Legal, Sales, and Marketing.</li>
<li>Experience with cloud infrastructure on AWS and containerized services using Docker and Kubernetes.</li>
<li>Demonstrated technical leadership and people management experience, including setting team vision and long-term roadmap, mentoring and growing engineers across all levels, driving day-to-day execution and engineering alignment, and partnering cross-functionally to deliver complex, high-impact platform investments.</li>
<li>Demonstrated ability to use AI to accelerate team execution, system design, and decision-making, paired with sound judgment in validating outputs, maintaining quality, and taking ownership of final outcomes.</li>
<li>Ability to build storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills.</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs.</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables.</li>
</ul>
<p>In-Office Requirement Statement:</p>
<ul>
<li>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</li>
<li>This role will need to be in the office for in-person collaboration 1-2 times/quarter, and therefore can be situated anywhere in the country.</li>
</ul>
<p>Relocation Statement:</p>
<ul>
<li>This position is not eligible for relocation assistance.</li>
</ul>
<p>At Pinterest, we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Python, AI, Cloud infrastructure, Containerized services, Docker, Kubernetes, Data lake storage, Metadata management, Big data, ML/AI innovations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494960</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0787994a-b99</externalid>
      <Title>Senior Cloud Deployment Engineer, Space</Title>
      <Description><![CDATA[<p>Anduril Industries is seeking a Senior Cloud Deployment Engineer to join their Space team. The successful candidate will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. They will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>
<p>A significant part of the duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. The engineer will also be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>
<p>The role requires 8+ years of working experience in DevOps or SRE-type roles, with strong proficiency in cloud services such as AWS, Azure, or Google Cloud Platform. Experience with IaC tools (Terraform, CloudFormation, Puppet, Ansible, etc.) and containerization technologies such as Docker, along with orchestration tools like Kubernetes and Helm, is also required.</p>
<p>The salary range for this role is $166,000-$220,000 USD per year, with highly competitive equity grants included in the majority of full-time offers. Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans, income protection, generous time off, and family planning and parenting support.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>AWS, Azure, Google Cloud Platform, IaC, Kubernetes, Helm, Docker, Terraform, CloudFormation, Puppet, Ansible</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that aims to transform U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5032429007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1dc80c94-f57</externalid>
      <Title>Senior Software Engineer (Money)</Title>
      <Description><![CDATA[<p>At Databricks, we are seeking a Senior Software Engineer to join our Money team in Bengaluru, India. As one of the first engineers for Money at Databricks India, you will be key to building a base for one of Databricks&#39; most central engineering teams.</p>
<p>You will own critical components that form the backbone of our products, starting with Databricks&#39; resource admission control and usage governance infrastructure. Your role is crucial in helping bring diverse business needs together, including abuse prevention, product commercialization motions, and reliable product availability at scale.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning Money systems and services that govern usage of all Databricks products and offerings.</li>
<li>Enhancing engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>
<li>Collaborating with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>
<li>Contributing to long-term vision and requirements development for Databricks products, in partnership with our engineering teams.</li>
</ul>
<p>We are looking for a candidate with a strong background in software engineering, preferably in Java, Scala, C++, or similar languages. You should have 7+ years of production-level experience and a proven track record in architecting, developing, deploying, and operating components of large-scale distributed systems.</p>
<p>If you are passionate about delivering high-quality solutions and have a proactive approach, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, Software Security, Cloud Technologies, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7654347002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fff47210-64d</externalid>
      <Title>Senior Software Engineer, Applied AI (Fullstack)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Okta&#39;s Business Technology organisation builds secure and intelligent internal platforms that power our global workforce. Our AI &amp; Automation team is delivering next-generation tools and experiences by integrating GenAI and intelligent automation into workflows across IT, HR, Finance, Sales, Marketing and Customer Support.</p>
<p>We focus on real-world applications: virtual agents, AI copilots, internal RAG services, and AI-augmented self-service portals, all with scale, governance, and user experience in mind.</p>
<p><strong>The Opportunity</strong></p>
<p>As a Senior Software Engineer, Applied AI, you&#39;ll play a key role in building user-facing and backend systems that leverage GenAI to improve internal experiences and operations. This role requires strong full-stack engineering skills, with an emphasis on both AI integration and building intuitive, performant UIs that make AI accessible and useful to our internal customers.</p>
<p>You&#39;ll work closely with software engineers, product managers, and designers to build secure, intelligent tools for employees across Okta.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and build end-to-end GenAI-powered applications, including web-based UIs, API services, and backend orchestration.</li>
<li>Implement and integrate LLM-based experiences using frameworks like LangChain, LlamaIndex, and tools like OpenAI, Claude, or Gemini.</li>
<li>Define, implement, and champion operational excellence standards (SLOs, observability, incident response frameworks) for all services deployed.</li>
<li>Develop responsive, accessible, and modern frontend interfaces using frameworks like React or Vue, with a focus on usability, performance, and trust in AI outputs.</li>
<li>Build and maintain a library of reusable frontend components and hooks that allow other business delivery teams to easily &#39;drop in&#39; GenAI capabilities into their own applications.</li>
<li>Build and maintain retrieval-augmented generation (RAG) pipelines with vector search and embedding strategies (e.g., Pinecone, FAISS, Qdrant).</li>
<li>Collaborate with designers and product managers to rapidly iterate on UX patterns for AI-powered experiences (e.g., prompt inputs, citations, summaries).</li>
<li>Ensure security, privacy, observability, and test coverage across the full stack.</li>
<li>Contribute to architecture decisions, engineering standards, and best practices for AI/automation systems.</li>
<li>Partner with platform and infrastructure teams to ensure services scale reliably across the org.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>5–8 years of software engineering experience with full-stack development, including 2+ years of building AI/ML-driven applications.</li>
<li>Strong Python development skills and 5+ years of experience building cloud-based services using AWS, Docker, and RESTful APIs.</li>
<li>2+ years of experience in frontend technologies like React, TypeScript, or Vue, and comfort working on UI/UX for internal tools or enterprise applications.</li>
<li>Hands-on experience with LLM integration, RAG pipelines, prompt engineering, or orchestration frameworks like LangChain or LlamaIndex.</li>
<li>Strong background in distributed systems, APIs, microservices, container orchestration (ECS/EKS), and cloud platforms (AWS/GCP/Azure).</li>
<li>Familiarity with secure coding, authentication/authorisation, and internal data governance best practices.</li>
<li>Ability to collaborate across engineering, design, and product teams, with a strong sense of user empathy and technical ownership.</li>
<li>Bonus: Exposure to design systems, AI evaluation tooling, or real-time application performance monitoring.</li>
</ul>
<p><strong>Why Join Okta</strong></p>
<ul>
<li>Make AI Real: Design and build AI-powered apps used daily by Okta employees.</li>
<li>Full-Stack Challenge: Tackle end-to-end problems, from LLM orchestration to intuitive UIs.</li>
<li>Trusted Innovation: Join a team committed to security, ethics, and technical excellence in AI.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000-$247,000 USD</Salaryrange>
      <Skills>Python, AWS, Docker, RESTful APIs, React, TypeScript, Vue, LLM integration, RAG pipelines, prompt engineering, orchestration frameworks, distributed systems, APIs, microservices, container orchestration, cloud platforms, design systems, AI evaluation tooling, real-time application performance monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions. It has a large global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7589781</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c05140a-b31</externalid>
      <Title>Senior Software Engineer, Actions (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our high-calibre Extensibility Engineering team to help us continue to improve our ultra-low latency, secure, and scalable platform for untrusted code execution.</p>
<p>In this role, you will have the opportunity to significantly contribute to the foundation of Auth0&#39;s Ecosystem, realising a huge impact for our customers and partners.</p>
<p>As a member of Developer Experience - Extensibility Platform, you will:</p>
<ul>
<li>Design, architect, and document large-scale distributed systems.</li>
<li>Implement features across different layers of the stack using technologies such as Go, MongoDB, PostgreSQL, AWS, Azure, and Kubernetes.</li>
<li>Lead team discussions and mentor other engineers, helping them grow into senior roles and improving the team’s productivity.</li>
<li>Contribute to improving Auth0&#39;s architecture, performance, observability, security controls, and best practices.</li>
<li>Collaborate with Product and Security teams to define and continually improve Auth0’s Extensibility platform and architecture.</li>
<li>Participate in our on-call rotations for troubleshooting production issues.</li>
</ul>
<p>Key Qualifications:</p>
<ul>
<li>5+ years of experience in software development, building distributed systems using Go.</li>
<li>Strong experience in API-driven applications using REST and/or gRPC.</li>
<li>Experience with packaging and distributing containerized applications using Docker and Kubernetes.</li>
<li>Experience with sandboxing untrusted code or tenant isolation (both preferred but not required).</li>
<li>A high bar for both code quality as well as quality of user experience.</li>
<li>Proven ability to collaborate with others to drive initiatives forward.</li>
</ul>
<p>Nice To Haves:</p>
<ul>
<li>Solid hands-on experience with Node.js in building scalable backend services</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Go, MongoDB, PostgreSQL, AWS, Azure, Kubernetes, API-driven applications, REST, gRPC, Docker, containerized applications, Node.js, scalable backend services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that specialises in authentication and authorization platforms.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7743622</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b271dfc9-021</externalid>
      <Title>Staff Software Engineer- Fullstack (Workflows)</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Staff Full-Stack Engineer to join the Integration Builder team within Okta Workflows. This team owns the core no-code surface that enables both internal teams and third-party developers (ISVs) to build powerful integrations and automation experiences with ease.</p>
<p>As a Staff Engineer, you&#39;ll lead initiatives that span front-end and back-end services, delivering performant, secure, and scalable features. You&#39;ll help define architecture, drive implementation, and collaborate closely with Design, PM, and Platform teams. You&#39;ll also work directly with our technical architects to help shape what we build and how we build it.</p>
<p>This is a high-impact role in a growing, strategic product area with strong executive visibility.</p>
<p>Role Details:</p>
<ul>
<li>Design, build, and maintain end-to-end features using modern JavaScript and cloud-native technologies (React, Node.js, TypeScript, PostgreSQL).</li>
<li>Lead technical design for key initiatives, driving quality, scalability, and maintainability.</li>
<li>Build reusable and performant UI components for a best-in-class no-code builder experience.</li>
<li>Own services throughout their lifecycle, including implementation, testing, deployment, observability, and incident response.</li>
<li>Work closely with Product, Design, and Architecture to define the “what” and “how” of features, ensuring solutions are both user-friendly and technically sound.</li>
<li>Partner with infrastructure and platform teams to optimize system performance and reliability.</li>
<li>Mentor and support engineers across the team, fostering a culture of quality, ownership, and continuous improvement.</li>
<li>Contribute to cross-functional planning, architectural reviews, and team-wide engineering practices.</li>
</ul>
<p>Experience:</p>
<ul>
<li>6+ years of experience building modern web applications in a full-stack environment.</li>
<li>Deep expertise in TypeScript, ReactJS, and Node.js (Express or similar frameworks).</li>
<li>Experience designing APIs, working with relational databases (PostgreSQL or similar), and building services in a distributed, cloud-based architecture.</li>
<li>A strong product mindset: you work well with Product and Design and care about delivering intuitive and elegant user experiences.</li>
<li>Ability to collaborate closely with Architects to make smart technical tradeoffs, and drive alignment across teams.</li>
<li>Passion for craftsmanship and high engineering standards (testing, monitoring, documentation, scalability).</li>
<li>Excellent communication skills, with the ability to lead technical discussions and build consensus across functions.</li>
<li>A growth mindset and interest in mentoring others and upleveling the team.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Experience with PostgreSQL, Docker, and Kubernetes.</li>
<li>Exposure to low-code/no-code tools, workflow engines, or visual development platforms.</li>
<li>Interest in AI-assisted developer tooling or automation.</li>
</ul>
<p>Education and Training:</p>
<ul>
<li>Bachelor&#39;s in Computer Science, or relevant industry experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$154,000-$230,000 CAD</Salaryrange>
      <Skills>TypeScript, ReactJS, Node.js, PostgreSQL, JavaScript, Cloud-native technologies, APIs, Relational databases, Distributed, cloud-based architecture, Docker, Kubernetes, Low-code/no-code tools, Workflow engines, Visual development platforms, AI-assisted developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta builds the trusted, neutral infrastructure that enables organisations to safely embrace the new era of AI.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7087237</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f38b4fcf-88f</externalid>
      <Title>Staff Software Engineer, Organization</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Organizations team. As a Staff Software Engineer, you will help drive architectural vision and strategy on the team to design and deliver powerful new enterprise functionality for our SaaS customers. You will identify and implement strategic technical improvements to our codebase and architecture, orchestrate and lead major technical projects, and mentor and coach less experienced engineers on sound engineering practices and technical leadership.</p>
<p>You will work closely with the Product Manager and Product Designer to define the look, feel, and functionality of new features and review customer feedback. You will also serve as a subject matter expert on building scalable, reliable, and maintainable distributed systems.</p>
<p>To be successful in this role, you will need to have solid architectural and security knowledge, backed by experience in designing, implementing, and evolving complex distributed systems. You will also need to have worked on projects that required close collaboration with external teams and have experience making those a success.</p>
<p>You will be a good mentor and communicator, able to explain complex concepts simply in person or in a document. You will know that while an engineer can write code, teams collaborate to ship successful products.</p>
<p>You will have solid previous experience using Node.js (JavaScript or TypeScript) to build scalable backend services and to create and maintain public and internal APIs. You will also have built frontend and full-stack apps and know which approach to use when.</p>
<p>You will have a good understanding of SQL databases and know how to debug and optimize table and query structure for performance under load. You will also have experience with Docker and cloud environments (AWS and Azure preferred).</p>
<p>Bonus points for experience with Kubernetes, knowledge of authentication protocols such as OAuth2, OIDC, SAML, understanding of event-driven architectures, especially Apache Kafka, understanding and experience of DevOps culture, and knowledge of security engineering and application security.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>€74.000-€102.000 EUR</Salaryrange>
      <Skills>Node.js, JavaScript, TypeScript, SQL databases, Docker, cloud environments, AWS, Azure, Kubernetes, authentication protocols, OAuth2, OIDC, SAML, event-driven architectures, Apache Kafka, DevOps culture, security engineering, application security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7560775</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f516f0ef-a2d</externalid>
      <Title>Senior Site Reliability Engineer (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission.</p>
<p>As a Senior Site Reliability Engineer, you&#39;ll join our SRE team based in Europe to ensure our production systems are not only operational but also resilient, scalable, and ready for exponential growth. This isn&#39;t just about keeping the lights on; it&#39;s about directly contributing to the platform&#39;s core resiliency and robustness.</p>
<p>You&#39;ll be a hands-on builder, crafting solutions that make our system more reliable by design.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design and build custom software in Go to enhance the platform&#39;s reliability, resiliency, and redundancy.</li>
<li>Partner with engineering teams to embed reliability principles, improving the availability, performance, and observability of our services.</li>
<li>Use your deep understanding of infrastructure and observability principles to identify opportunities for improvement within the product and implement solutions.</li>
<li>Contribute to our on-call rotation, providing rapid, effective response to critical incidents and using your expertise to troubleshoot, mitigate or accurately escalate production issues.</li>
<li>Develop and refine our SRE tooling and processes, focusing on automation and operational efficiency.</li>
<li>Define, document, and champion reliability best practices across the organisation.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>A proactive and systematic approach to problem-solving, with a high degree of ownership.</li>
<li>Proven experience in a production environment supporting large-scale, mission-critical applications with a high degree of autonomy.</li>
<li>Proficiency in at least one programming language, with a preference for Go. You should be comfortable writing custom applications, not just scripts.</li>
<li>Experience with infrastructure as code (Terraform), container orchestration (Kubernetes, Docker) and GitOps (ArgoCD).</li>
<li>Demonstrable expertise in a major cloud provider (Azure, AWS, or GCP).</li>
<li>A strong grasp of microservices architecture, databases (SQL, NoSQL), and networking fundamentals, so you can understand how custom code can solve platform-level issues.</li>
<li>An understanding of core SRE principles, including SLIs, SLOs, and error budgets.</li>
<li>Experience in an on-call rotation for a 24/7 cloud-based environment.</li>
<li>Exceptional communication and collaboration skills, with a proven ability to work effectively in a remote, distributed team, where tasks may be self-driven.</li>
</ul>
<p>We&#39;re looking for someone who is not just looking for a job, but a career-defining opportunity to tackle complex challenges at a massive scale. If you&#39;re a curious and motivated engineer who&#39;s passionate about building reliability directly into the platform, we&#39;d love to hear from you.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Go, Terraform, Kubernetes, Docker, GitOps, Cloud provider (Azure, AWS, or GCP), Microservices architecture, Databases (SQL, NoSQL), Networking fundamentals, Core SRE principles (SLIs, SLOs, error budgets)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides an unparalleled authentication experience for hundreds of millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7791590</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>782a1c68-325</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>At ZoomInfo, we&#39;re looking for a Senior DevOps Engineer to join our Infrastructure Engineering group. As a Senior DevOps Engineer, you will be responsible for innovation in infrastructure and automation for ZoomInfo Engineering. You will have a strong background in modern infrastructure, with a thorough understanding of industry best practices. You will have a high level of comfort participating in challenging technical discussions and advocating for best practices in a high-paced environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Thorough, clear, concise documentation of new and existing standards, procedures, and automated workflows</li>
<li>Championing of best practices and standards around infrastructure configuration and management</li>
<li>Experience in creating internal products and managing their software development lifecycle</li>
<li>Deployment, configuration, and management of infrastructure via infrastructure as code</li>
<li>Working hands on with cloud infrastructure (AWS, Azure, and GCP)</li>
<li>Working hands on with container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE, etc.)</li>
<li>Configuration and management of Linux based tools and third-party cloud services</li>
<li>Continuous improvement of our infrastructure, ensuring that it is highly available and observable</li>
</ul>
<p>Minimum Requirements:</p>
<ul>
<li>Solid foundation of experience managing Linux systems in virtual environments (6+ years)</li>
<li>Deploying and maintaining highly available infrastructure in one or more Cloud providers (5+ years, AWS or GCP preferred)</li>
<li>Infrastructure as code using Terraform (4+ years)</li>
<li>Creating, deploying, maintaining, and troubleshooting Docker images (4+ years)</li>
<li>Scoping, deploying, maintaining and troubleshooting Kubernetes clusters (4+ years)</li>
<li>Developing and maintaining an active codebase, preferably in Go or Python (3+ years)</li>
<li>Experience with PaaS technologies (5+ years, EKS and GKE preferred)</li>
<li>Maintaining monitoring and observability tools (Datadog, Prometheus preferred)</li>
<li>Thorough understanding of network infrastructure and concepts (VPNs, routers and routing protocols, TCP/IP, IPv4 and v6, UDP, OSI layers, etc.)</li>
<li>Experience with load balancing and proxy technologies (Istio, Nginx, HAProxy, Apache, Cloud load balancers, etc.)</li>
<li>Debugging and troubleshooting complex problems in cloud-native infrastructure.</li>
<li>A Slack-native mentality.</li>
<li>Bachelor’s Degree in Computer Science or a related technical discipline, or the equivalent combination of education, technical certifications, training, or work experience.</li>
</ul>
<p>Abilities Required:</p>
<ul>
<li>Demonstrated ability to learn new technologies quickly and independently</li>
<li>Strong technical, organizational and interpersonal skills</li>
<li>Strong written and verbal communication skills</li>
<li>Must be able to read, understand, and communicate complex problems and solutions in English over a textual medium (such as Slack)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux, Cloud infrastructure (AWS, Azure, GCP), Container infrastructure (Docker, Kubernetes, ECS, EKS, GKE, GAE), Infrastructure as code (Terraform), Go, Python, PaaS technologies (EKS, GKE), Monitoring and observability tools (Datadog, Prometheus)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a technology company that provides a go-to-market intelligence platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8287254002</Applyto>
      <Location>Ra&apos;anana, Israel</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4860f898-cae</externalid>
      <Title>Manager, Infrastructure Security (USA)</Title>
      <Description><![CDATA[<p>As a Manager on the Infrastructure Security Team within the Product Security Department, you will work with teams across GitLab to ensure that the components comprising our cloud infrastructure are built with the resiliency and security expectations that our customers depend on to power their software factories.</p>
<p>You&#39;ll lead and develop a high-performing team focused on securing GitLab&#39;s internal cloud infrastructure (e.g. internal tooling and Sandbox) and our FedRAMP-authorized SaaS offering, GitLab Dedicated for Government.</p>
<p>You&#39;ll redefine the benchmark for Infrastructure Security through relentless advocacy of our Core Values and Dogfooding.</p>
<p>You&#39;ll maintain strong partnerships with peers across GitLab (e.g. Infrastructure, Finance, Product, and Legal) to ensure that the team can contribute effectively to cross-functional initiatives, building security in from the foundation upward.</p>
<p>When required, you&#39;ll leverage your extensive infrastructure experience and conflict resolution skills to unblock decisions.</p>
<p>You&#39;ll collaborate with the Product Security Leadership to develop and refine the Infrastructure Security vision and strategic roadmap.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Contribute to the Infrastructure Security team&#39;s vision and strategic roadmap</li>
<li>Serve as a stable counterpart to teams such as Public Sector SRE, providing infrastructure security guidance and partnership</li>
<li>Provide professional guidance and input on infrastructure security within and outside of your team</li>
<li>Collaborate with other security teams in support of cross-team security efforts, process improvements, and driving down risk across the organization</li>
<li>Build collaborative cross-functional partnerships with teams across Infrastructure Engineering, Engineering and Development, Product Management, and Legal</li>
<li>Manage an existing high-performing team of infrastructure security professionals and hire new members as appropriate</li>
<li>Lead and mentor your team by helping grow their skills and experience, fostering a culture of continuous improvement, holding regular 1:1s, and being your team&#39;s role model in exemplifying GitLab company values</li>
<li>Establish and implement security policies, procedures, standards, and guidelines in support of infrastructure security</li>
<li>Fulfill the Product Security Division Mission of securing GitLab Infrastructure with our own product (&#39;dogfooding&#39;)</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Hands-on public cloud security experience (GCP or AWS), ideally with an SRE background</li>
<li>Practitioner-level CI/CD, Docker, Kubernetes, cloud-native, and serverless experience</li>
<li>Track record of leading and implementing infrastructure automation in service of security (e.g. Chef, Ansible, Terraform)</li>
<li>Experience managing infrastructure security in regulated environments (e.g. FedRAMP, PCI)</li>
<li>Solid grasp of the current threat landscape, distributed architectures, infrastructure-level systems design, and threat modeling</li>
<li>Strong written, verbal, and presentation skills across a range of stakeholders</li>
<li>Comfortable operating in a remote, async, distributed environment with ambiguity and shifting priorities</li>
<li>Experience managing and developing teams of 5+</li>
<li>Alignment with GitLab&#39;s values and Leadership at GitLab manager responsibilities</li>
</ul>
<p>Due to government requirements, you must be a United States Citizen (defined as any individual who is a citizen of the United States by law, birth, or naturalization) to fill this position. The base salary range for this role&#39;s listed level is currently for residents of the United States only. This range is intended to reflect the role&#39;s base salary rate in locations throughout the US. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, alignment with market data, and geographic location. The base salary range does not include any bonuses, equity, or benefits. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$140,000-$245,000 USD</Salaryrange>
      <Skills>public cloud security, CI/CD, Docker, Kubernetes, cloud-native, serverless, infrastructure automation, security policies, procedures, standards, guidelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8468166002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f3f1713-f74</externalid>
      <Title>Systems Reliability Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code.</p>
<p>As a Systems Reliability Engineer on one of our Production Engineering teams, you&#39;ll be building the tools to help engineers deploy and operate the services that make Cloudflare work. Our mission is to provide a reliable, yet flexible, platform to help product teams release new software efficiently and safely.</p>
<p>Core platforms we operate at Cloudflare include:</p>
<ul>
<li>Kubernetes</li>
<li>Kafka</li>
<li>Developer tools, CI, and CD systems</li>
<li>Vault, Consul</li>
<li>Terraform</li>
<li>Temporal Workflows</li>
<li>Cloudflare Developer Platform</li>
</ul>
<p>Responsibilities</p>
<ul>
<li>Build software that automates the operation of large, highly-available distributed systems.</li>
<li>Ensure platform security, and guide security best practices</li>
<li>Document your work and guide fellow developers towards optimal solutions</li>
<li>Contribute back to the open source community</li>
<li>Leave code better than we found it</li>
</ul>
<p>Requirements</p>
<ul>
<li>Recent career experience with Go or Python, and at least 3 years of experience as a full-time software engineer (any language). Rust is an added bonus.</li>
<li>Experience with deploying and managing services using Docker on Linux</li>
<li>A firm grasp of IP networking, load balancing and DNS</li>
<li>Excellent debugging skills in a distributed systems environment</li>
<li>Source control experience including branching, merging and rebasing (we use git)</li>
<li>The ability to break down complex problems and drive towards a solution</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience with Deployments, StatefulSets, PersistentVolumeClaims, Ingresses, and CRDs on Kubernetes</li>
<li>Operational experience deploying and managing large systems on bare metal</li>
<li>Experience as a Site Reliability Engineer (SRE) for a large-scale company</li>
<li>Practical knowledge of web and systems performance, with extensive use of tracing tools such as eBPF and strace</li>
<li>Alerting and monitoring (Prometheus/Alertmanager), configuration management (Salt)</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Docker, Linux, IP networking, load balancing, DNS, source control, git, Kubernetes, Kafka, Vault, Consul, Terraform, Temporal Workflows, Cloudflare Developer Platform, Rust, Deployment, StatefulSets, Persistent Volumes Claims, Ingresses, CRDs, ebpf, strace, Prometheus, Alert Manager, salt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that operates one of the world&apos;s largest networks, powering millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7453074</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b267407d-022</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Staff Full-Stack Engineer to join the Flow builder team within Okta Workflows. This team owns the core no-code canvas that enables both internal teams and our customers to build powerful automation experiences with ease.</p>
<p>As a Staff Engineer, you&#39;ll lead initiatives that span front-end and back-end services, delivering performant, secure, and scalable features. You&#39;ll help define architecture, drive implementation, and collaborate closely with Design, PM, and Platform teams. You&#39;ll also work directly with our technical architects to help shape what we build, and how we build it.</p>
<p>This is a high-impact role in a growing, strategic product area with strong executive visibility.</p>
<p>Role Details:</p>
<ul>
<li>Design, build, and maintain end-to-end features using modern JavaScript and cloud-native technologies (React, Node.js, TypeScript, PostgreSQL).</li>
<li>Lead technical design for key initiatives, driving quality, scalability, and maintainability.</li>
<li>Build reusable and performant UI components for a best-in-class no-code builder experience.</li>
<li>Own services throughout their lifecycle, including implementation, testing, deployment, observability, and incident response.</li>
<li>Work closely with Product, Design, and Architecture to define the “what” and “how” of features, ensuring solutions are both user-friendly and technically sound.</li>
<li>Partner with infrastructure and platform teams to optimize system performance and reliability.</li>
<li>Mentor and support engineers across the team, fostering a culture of quality, ownership, and continuous improvement.</li>
<li>Contribute to cross-functional planning, architectural reviews, and team-wide engineering practices.</li>
</ul>
<p>Experience:</p>
<ul>
<li>8+ years of experience building modern web applications in a full-stack environment.</li>
<li>Deep expertise in TypeScript, ReactJS, and Node.js (Express or similar frameworks).</li>
<li>Experience designing APIs and building robust services at scale in a distributed, cloud-based architecture.</li>
<li>Experience with PostgreSQL, Docker, and Kubernetes.</li>
<li>Experience delivering elegant, enterprise-grade user experiences by partnering with Product and Design teams in a fast-paced, agile environment.</li>
<li>Ability to collaborate closely with Architects to make smart technical tradeoffs, and drive alignment across teams.</li>
<li>Passion for craftsmanship and high engineering standards (testing, monitoring, documentation, scalability).</li>
<li>Excellent communication skills, with the ability to lead technical discussions and build consensus across functions.</li>
<li>A growth mindset and interest in mentoring others and upleveling the team.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to low-code/no-code tools, workflow engines, or visual development platforms.</li>
<li>Interest in AI-assisted developer tooling or automation.</li>
</ul>
<p>Education and Training:</p>
<ul>
<li>Bachelor’s degree in Computer Science, or relevant industry experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TypeScript, ReactJS, Node.js, PostgreSQL, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7155588</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc23dcd4-30e</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Software Engineer to join our Ads team. As a backend engineer, you&#39;ll work on building scalable microservices and APIs that power our advertiser-facing product, ads.reddit.com. You&#39;ll also collaborate with the platform and data teams to build new features and improve operational stability.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Working with product managers to design and implement Ads products</li>
<li>Collaborating closely with the platform and data teams while building new features</li>
<li>Leading the processes needed to improve operational stability, including improving code quality, delivering dashboards and data visualizations</li>
<li>Building extensible components that align with product objectives</li>
<li>Supporting day-to-day project management tasks, including communicating project updates, managing project timelines, and overseeing project execution</li>
</ul>
<p>To succeed in this role, you&#39;ll need:</p>
<ul>
<li>3+ years of software development experience in one or more general-purpose programming languages (Java, Scala, Go, C++, Python)</li>
<li>Ability to take complete ownership of a feature or project</li>
<li>Experience working in the Ads domain is a plus</li>
<li>Interest in the advertising business and understanding customer needs is a plus</li>
</ul>
<p>We offer a range of benefits, including global benefit programs, family planning support, gender-affirming care, mental health and coaching benefits, comprehensive medical benefits, and more.</p>
<p>If you&#39;re passionate about building scalable and reliable software systems, and want to join a team that&#39;s dedicated to innovation and growth, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Scala, Kafka, Postgres, BigQuery, Redis, Druid, Kubernetes, Argo, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/6909093</Applyto>
      <Location>Remote - Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2075095a-d93</externalid>
      <Title>Senior Software Engineer, BizTech(AI Products)</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Senior Software Engineer, AI Products (India)</p>
<p><strong>Company Overview</strong></p>
<p>Airbnb is a global online marketplace for booking accommodations, with over 5 million hosts and 2 billion guest arrivals.</p>
<p><strong>The Community You Will Join</strong></p>
<p>The Airfam Products team exists to make every Airbnb employee more productive through a unified digital headquarters experience. As part of a 13-person cross-functional team of engineers, designers, researchers, and product managers, you&#39;ll work on platforms that serve Airbnb&#39;s entire global workforce. Our portfolio includes One Airbnb (the company&#39;s internal cultural hub with enterprise search, people profiles, and AI-powered chat), OneChat (Airbnb&#39;s enterprise AI assistant enabling secure LLM interactions), and a suite of tools that power how employees discover information, connect with colleagues, and get work done. You&#39;ll be joining the AI for Non-Developers workstream, focused on expanding AI productivity tools to all Airbnb employees: building OneChat Agents, deep research capabilities, artifact creation tools, and task automation that make AI accessible to everyone, regardless of technical background.</p>
<p><strong>The Difference You Will Make</strong></p>
<p>As a Senior Software Engineer on the Airfam Products team, you&#39;ll be instrumental in building Airbnb&#39;s next generation of AI-powered employee experience platforms. Your work will be a force multiplier for the entire company: every AI feature you ship, every system you architect, and every engineer you mentor will amplify productivity across Airbnb&#39;s global workforce. You will:</p>
<ul>
<li>Democratize AI by building tools that empower non-technical employees to leverage the power of LLMs</li>
<li>Drive innovation by taking AI prototypes from concept to production at scale</li>
<li>Shape the future of how Airbnb employees work, collaborate, and discover information</li>
</ul>
<p><strong>A Typical Day</strong></p>
<ul>
<li>Lead the technical design and implementation of LLM-powered features for OneChat and enterprise AI tools, including RAG pipelines, AI agents, and prompt optimization</li>
<li>Partner with product managers, designers, and cross-functional teams to translate user problems into AI-powered solutions that serve Airbnb&#39;s global workforce</li>
<li>Develop and iterate on agentic AI capabilities, including multi-step reasoning, tool use, and context-aware decision-making</li>
<li>Implement evaluation pipelines and quality systems to measure model performance, detect hallucinations, and ensure responsible AI practices</li>
<li>Own production AI systems end-to-end, including deployment strategies, monitoring, alerting, and incident response</li>
<li>Collaborate with the DevAI team on AirChat SDK integrations, MCP (Model Context Protocol) implementations, and Glean Action Packs</li>
<li>Mentor engineers (L6-L8) through design reviews, architecture discussions, and pair programming sessions</li>
<li>Stay current with the rapidly evolving GenAI landscape, evaluating new models and techniques for potential application</li>
<li>Balance hands-on technical contributions with technical leadership activities</li>
</ul>
<p><strong>Your Expertise</strong></p>
<ul>
<li>8+ years of software engineering experience, with significant focus on building production AI/ML systems</li>
<li>2+ years of hands-on experience with Large Language Models (LLMs), including fine-tuning, prompt engineering, embeddings, and retrieval-augmented generation (RAG)</li>
<li>Strong proficiency in backend technologies (TypeScript, Go, or Java)</li>
<li>Strong backend and distributed systems expertise, including API design (REST, GraphQL) and cloud infrastructure (AWS, GCP, or Azure)</li>
<li>Track record of shipping AI-powered products from prototype to production</li>
<li>Proven ability to collaborate cross-functionally and influence without authority</li>
<li>Excellent communication skills with ability to distill complex technical concepts for diverse audiences</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or equivalent practical experience</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Master&#39;s or PhD in Computer Science, Machine Learning, or related field</li>
<li>Experience building AI agents and multi-agent systems, preferably using Claude</li>
<li>Experience building integrations using MCP</li>
<li>Experience with containerization and orchestration (Docker, Kubernetes)</li>
<li>Background in building enterprise-grade internal tools and developer productivity platforms</li>
<li>Experience with frontend technologies (React, Next.js) for full-stack AI product development</li>
<li>Contributions to open-source Gen AI/ML projects or publications at top venues</li>
</ul>
<p><strong>Your Location</strong></p>
<p>This position is based in Bangalore, India with a hybrid work arrangement. You&#39;ll collaborate with teammates across global time zones, with primary alignment to Pacific Time for key meetings.</p>
<p><strong>Our Commitment to Inclusion &amp; Belonging</strong></p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, production AI/ML systems, Large Language Models (LLMs), backend technologies (TypeScript, Go, or Java), API design (REST, GraphQL), cloud infrastructure (AWS, GCP, or Azure), master&apos;s or PhD in Computer Science, Machine Learning, or related field, experience building AI agents and multi-agent systems, experience building integrations using MCP, experience with containerization and orchestration (Docker, Kubernetes), background in building enterprise-grade internal tools and developer productivity platforms, experience with frontend technologies (React, Next.js) for full-stack AI product development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for booking accommodations, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7730723</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d39c6a2f-6a4</externalid>
      <Title>Senior Software Engineer - Data + AI Observability</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer on the Customer Foresight Team, you will lead the development of products that customers use to get insights on their AI and data workloads, optimize their performance, and lower costs.</p>
<p>This role requires leading the technical development of product milestones from refinement of requirements, through execution, operation, and iterating with the broader product development team and partner teams to ensure product success.</p>
<p>The impact you will have:</p>
<ul>
<li>Develop systems that make it simple for customers to answer questions about their Databricks environment by providing a reliable and timely interface for observability</li>
<li>Make it easy for frameworks teams across Databricks to publish their data to customers</li>
<li>Effectively lead large milestones from the observability roadmap, contribute to the long-term vision and requirements development for Databricks products</li>
<li>Mentor other engineers towards contributing to the product and their career growth</li>
<li>Optimise the scalability and cost of large data pipelines towards reducing costs for customers and opening up new product opportunities</li>
<li>Drive integration with popular tools and services in the broader data warehousing ecosystem</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of production-level experience in Java, Scala, Python, C++, or a similar language</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
<li>Proven track record in architecting, developing, deploying, and operating components of large scale distributed systems</li>
<li>Strong software engineering maturity: security, correctness, engineering excellence, operational excellence</li>
<li>Platform mindset: effectiveness in building platforms for other software teams, iterating quickly to reduce friction to adoption, and advocating for platform use</li>
<li>Cross-functional skills: our platform serves both internal teams at Databricks and external customers, which requires strong cross-functional skills</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, Python, C++, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7897431002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3299844-c42</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p><strong>The Opportunity</strong></p>
<p>The Migration Services team builds the critical, data-driven services that seamlessly move customers across environments in real-time. We are looking for a Senior Software Engineer who is passionate about crafting elegant solutions to complex distributed systems problems. You will be a key player in driving innovation, collaborating with architects and product managers to build and own the crucial infrastructure that underpins the Auth0 ecosystem. If you are excited by the prospect of making a massive impact, we want to hear from you!</p>
<p><strong>What You&#39;ll Achieve</strong></p>
<ul>
<li>Build for scale. You will develop and operate highly scalable, data-intensive services, demonstrating code craftsmanship and an eye for detail.</li>
<li>Master the data stream. You&#39;ll leverage streaming technologies and implement advanced change data capture (CDC) strategies to ensure the secure, reliable, and efficient transfer of data.</li>
<li>Drive operational excellence. Through continuous monitoring and performance tuning, you will enhance the reliability of our migration processes and participate in our team&#39;s on-call rotation to ensure our services are always on.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Proven engineering background. With 3+ years of experience in fast-paced, agile environments, you have a proven track record of shipping high-quality software.</li>
<li>Database familiarity. You possess a strong understanding of database fundamentals and have hands-on experience with datastores like MongoDB and PostgreSQL.</li>
<li>Go is your go-to. You have strong proficiency in Golang or, optionally, in Node.js.</li>
<li>A passion for reliability. You have interest and experience in reliability engineering, including familiarity with observability and incident management.</li>
<li>Collaborative skills. Your excellent written and verbal communication skills enable you to collaborate effectively with cross-functional and geo-dispersed teams.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure) and container technologies such as Kubernetes and Docker.</li>
</ul>
<p>#Hybrid</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, MongoDB, PostgreSQL, Distributed systems, Reliability engineering, Observability, Incident management, Kafka, IAM, Cloud providers, Container technologies, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7809897</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fca5411d-4fb</externalid>
      <Title>Staff Site Reliability Engineer - Kubernetes</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Workforce Identity Cloud</p>
<p>Okta Workforce Identity Cloud (WIC) provides easy, secure access for your workforce so you can focus on other strategic priorities, like reducing costs and doing more for your customers.</p>
<p>If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate is someone who exemplifies the ethos of “If you have to do something more than once, automate it” and who can rapidly self-educate on new concepts and tools.</p>
<p><strong>Position Overview:</strong></p>
<p>The Site Reliability Engineer (SRE) will play a key role in building and managing Kubernetes platforms that support cloud-native applications and services. This position focuses on architecting and managing reliable, scalable, and secure Kubernetes-based platforms on AWS, ensuring high availability and performance while optimising costs and automation. The ideal candidate will have hands-on experience with AWS infrastructure, Kubernetes platform creation, Helm charts, Karpenter scaling, and Istio service mesh.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Kubernetes Platform Creation: Design, implement, and maintain highly available, scalable, and fault-tolerant Kubernetes platforms. Ensure clusters are optimised for production workloads, providing high resilience and operational efficiency.</li>
<li>AWS Infrastructure Management: Build, manage, and optimise AWS cloud infrastructure, including EKS, ECS, S3, VPCs, RDS, IAM, and more. Implement best practices for cost management, scaling, and security within AWS.</li>
<li>Helm Management: Utilise Helm to automate and streamline the deployment of applications and services to Kubernetes clusters. Create, maintain, and manage Helm charts for production-ready deployments.</li>
<li>Karpenter Implementation: Implement and manage Karpenter to dynamically scale Kubernetes clusters in response to workload demands.</li>
<li>Istio Service Mesh Management: Configure and manage Istio to provide service-to-service communication, security, and observability within the Kubernetes clusters. Enable fine-grained traffic management, service discovery, and policy enforcement.</li>
<li>Platform Automation &amp; Scaling: Automate the deployment, scaling, and management of infrastructure and applications. Work with CI/CD pipelines to ensure a seamless flow from development to production with minimal downtime.</li>
<li>Incident Management &amp; Troubleshooting: Respond to incidents, troubleshoot, and resolve system issues related to performance, availability, and security in a timely and effective manner.</li>
<li>Security &amp; Compliance: Design and implement secure cloud infrastructure with appropriate access controls, network security, and compliance frameworks.</li>
<li>Documentation &amp; Knowledge Sharing: Create and maintain detailed documentation for Kubernetes platform setup, operational procedures, and best practices. Promote knowledge sharing across teams.</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>4+ years of experience with Kubernetes/Helm.</li>
<li>4+ years of experience with Terraform.</li>
<li>5+ years of experience with AWS.</li>
<li>Experience with multi-region cloud environments.</li>
<li>Proven experience with AWS (EC2, RDS, S3, CloudFormation, IAM, etc.) and solid understanding of cloud-native architectures.</li>
<li>Strong expertise in Kubernetes platform creation, management, and optimisation (e.g., setting up highly available clusters, networking, and storage).</li>
<li>Hands-on experience with Helm for Kubernetes application deployment and management.</li>
<li>Practical experience with Karpenter for dynamic scaling of Kubernetes clusters and optimising resource usage.</li>
<li>Expertise in managing and securing Istio for service mesh, including traffic management, security, and observability features.</li>
<li>Proficiency in CI/CD pipelines and automation tools (e.g., Jenkins, GitLab, CircleCI, Terraform, Ansible, Spinnaker).</li>
<li>Strong scripting and automation skills in Python, Bash, or Go for infrastructure management and platform automation.</li>
<li>Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, CloudWatch, and ELK Stack.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Understanding of security best practices for cloud platforms and Kubernetes (e.g., role-based access control (RBAC), encryption, and compliance frameworks).</li>
<li>Familiarity with Docker and containerization principles.</li>
<li>Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent professional experience).</li>
<li>Certifications (Preferred): CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), or AWS Certified DevOps Engineer are highly desirable.</li>
</ul>
<p><strong>Additional Requirements:</strong></p>
<ul>
<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</li>
<li>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</li>
</ul>
<p>#LI-Hybrid</p>
<p>#LI-LSS1</p>
<p>Requisition ID: P16373_3396241</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$267,000 USD</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between:$174,000-$214,000 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$174,000-$214,000 USD</Salaryrange>
      <Skills>Kubernetes, Helm, Terraform, AWS, Cloud-native architectures, Kubernetes platform creation, Kubernetes management, Kubernetes optimisation, Helm for Kubernetes application deployment, Karpenter for dynamic scaling, Istio for service mesh, CI/CD pipelines, Automation tools, Python, Bash, Go, Monitoring, Logging, Alerting, Security best practices for cloud platforms and Kubernetes, Docker and containerization principles, Certified Kubernetes Administrator, Certified Kubernetes Application Developer, AWS Certified DevOps Engineer</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7743339</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>946d6893-cbb</externalid>
      <Title>Infrastructure Security Engineer (USA)</Title>
      <Description><![CDATA[<p>As a member of the Infrastructure Security Team within the Product Security Department, you will work with teams across GitLab to ensure that the components that comprise our cloud infrastructure are built with the resiliency and security expectations that our customers depend on to power their software factories.</p>
<p>We’re looking for an Intermediate Infrastructure Security Engineer to further our automation efforts in support of our GitLab Dedicated for Government product offering. You’ll have the opportunity to contribute to tooling that operates our FedRAMP environment, identify and develop remediations for infrastructure vulnerabilities, and partner with more senior engineers to review upcoming project architectures to ensure that they are built to the rigorous standards we hold.</p>
<p>In this role, you will:</p>
<ul>
<li>Support the Public Sector SRE team as a stable counterpart</li>
<li>Identify and help mitigate security issues, misconfigurations, and vulnerabilities related to GitLab’s cloud, container, and Kubernetes infrastructure</li>
<li>Build tooling to increase our visibility into environments to expedite vulnerability detection</li>
<li>Own efforts securing GitLab&#39;s FedRAMP environment</li>
<li>Support other security teams as an Infrastructure SME</li>
<li>Document best practices and remediations to help engineers learn from common vulnerability types</li>
<li>Partner with senior engineers to review new architectures and projects and provide feedback cross-functionally</li>
<li>Fulfill the Product Security Division Mission of securing GitLab Infrastructure with our own product (“dogfooding”)</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Hands-on experience with public cloud providers (ex. AWS, GCP, Azure)</li>
<li>Development experience with Ruby, Python, or Go</li>
<li>Experience with Infrastructure-as-Code (IaC) tools (ex. Terraform, Ansible, Chef)</li>
<li>Knowledge of the Linux operating system</li>
<li>Familiarity with containers (Docker) and orchestration platforms (Kubernetes)</li>
<li>An interest in Information Security</li>
<li>Demonstrated experience working collaboratively with cross-functional teams</li>
<li>Proficiency communicating over a text-based medium (Slack, GitLab Issues, Email) and succinctly documenting technical details</li>
<li>Alignment with our values and the ability to work in accordance with those values</li>
</ul>
<p>Due to government requirements, you must be a United States Citizen (defined as any individual who is a citizen of the United States by law, birth, or naturalization) to fill this position.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$103,600-$185,000 USD</Salaryrange>
      <Skills>public cloud providers, Ruby, Python, Go, Infrastructure-as-Code (IaC) tools, Linux operating system, containers (Docker), orchestration platforms (Kubernetes), Information Security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8459132002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a13444d1-8fb</externalid>
      <Title>Staff Software Engineer (Platform - Financial Engineering)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Staff Software Engineer to join our Financial Engineering team.</p>
<p>As a Staff Software Engineer, you will be responsible for architecting and building foundational backend systems with a focus on performance, scalability, and reliability.</p>
<p>You will drive strategic technical direction for complex, cross-team initiatives and establish and evolve platform best practices, frameworks, and architectural standards within the team.</p>
<p>You will provide deep technical mentorship, guide design decisions, and raise the bar for engineering quality.</p>
<p>You will collaborate with product, finance, and engineering leadership to shape the team&#39;s technical roadmap.</p>
<p>You will drive system improvements by embedding AI into engineering and operational practices.</p>
<p>What we look for in you:</p>
<ul>
<li>8+ years building and operating large-scale distributed systems in production.</li>
<li>Deep expertise in backend programming (e.g., Go, Python, Java) and cloud-native architecture.</li>
<li>Proven track record designing highly available, high-performance systems.</li>
<li>Ability to anticipate scaling bottlenecks and take proactive measures.</li>
<li>Experience leading cross-functional technical initiatives and mentoring engineers.</li>
<li>Ability to distill complex technical concepts into clear, actionable solutions.</li>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with financial data, accounting systems, or high-precision transaction processing, or experience in the Auth domain.</li>
<li>You&#39;ve worked with Golang, Ruby, Docker, Rails, Postgres, MongoDB or DynamoDB.</li>
<li>You have gone through rapid growth in your company (from startup to mid-size).</li>
</ul>
<p>Job #: P75025</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>
<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$218,025-$256,500 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>backend programming, cloud-native architecture, large-scale distributed systems, highly available systems, high-performance systems, generative AI tools, copilots, LibreChat, Gemini, Glean, financial data, accounting systems, high-precision transaction processing, Auth domain, Golang, Ruby, Docker, Rails, Postgres, MongoDB, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7685208</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc3394a5-691</externalid>
      <Title>Senior Software Engineer, Applied AI (Fullstack)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Okta&#39;s Business Technology organisation builds secure and intelligent internal platforms that power our global workforce. Our AI &amp; Automation team is delivering next-generation tools and experiences by integrating GenAI and intelligent automation into workflows across IT, HR, Finance, Sales, Marketing and Customer Support.</p>
<p>We focus on real-world applications: virtual agents, AI copilots, internal RAG services, and AI-augmented self-service portals, all with scale, governance, and user experience in mind.</p>
<p><strong>The Opportunity</strong></p>
<p>As a Senior Software Engineer, Applied AI, you&#39;ll play a key role in building user-facing and backend systems that leverage GenAI to improve internal experiences and operations. This role requires strong full-stack engineering skills, with an emphasis on both AI integration and building intuitive, performant UIs that make AI accessible and useful to our internal customers.</p>
<p>You&#39;ll work closely with software engineers, product managers, and designers to build secure, intelligent tools for employees across Okta.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and build end-to-end GenAI-powered applications, including web-based UIs, API services, and backend orchestration.</li>
<li>Implement and integrate LLM-based experiences using frameworks like LangChain, LlamaIndex, and tools like OpenAI, Claude, or Gemini.</li>
<li>Define, implement, and champion operational excellence standards (SLOs, observability, incident response frameworks) for all services deployed.</li>
<li>Develop responsive, accessible, and modern frontend interfaces using frameworks like React or Vue, with a focus on usability, performance, and trust in AI outputs.</li>
<li>Build and maintain a library of reusable frontend components and hooks that allow other business delivery teams to easily &#39;drop in&#39; GenAI capabilities into their own applications.</li>
<li>Build and maintain retrieval-augmented generation (RAG) pipelines with vector search and embedding strategies (e.g., Pinecone, FAISS, Qdrant).</li>
<li>Collaborate with designers and product managers to rapidly iterate on UX patterns for AI-powered experiences (e.g., prompt inputs, citations, summaries).</li>
<li>Ensure security, privacy, observability, and test coverage across the full stack.</li>
<li>Contribute to architecture decisions, engineering standards, and best practices for AI/automation systems.</li>
<li>Partner with platform and infrastructure teams to ensure services scale reliably across the org.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>5–8 years of software engineering experience with full-stack development, including 2+ years of building AI/ML-driven applications.</li>
<li>Strong Python development skills and 5+ years experience building cloud-based services using AWS, Docker, and RESTful APIs.</li>
<li>2+ years of experience in frontend technologies like React, TypeScript, or Vue, and comfort working on UI/UX for internal tools or enterprise applications.</li>
<li>Hands-on experience with LLM integration, RAG pipelines, prompt engineering, or orchestration frameworks like LangChain or LlamaIndex.</li>
<li>Strong background in distributed systems, APIs, microservices, container orchestration (ECS/EKS), and cloud platforms (AWS/GCP/Azure).</li>
<li>Familiarity with secure coding, authentication/authorisation, and internal data governance best practices.</li>
<li>Ability to collaborate across engineering, design, and product teams, with a strong sense of user empathy and technical ownership.</li>
<li>Bonus: Exposure to design systems, AI evaluation tooling, or real-time application performance monitoring.</li>
</ul>
<p><strong>Why Join Okta</strong></p>
<ul>
<li>Make AI Real: Design and build AI-powered apps used daily by Okta employees.</li>
<li>Full-Stack Challenge: Tackle end-to-end problems, from LLM orchestration to intuitive UIs.</li>
<li>Trusted Innovation: Join a team committed to security, ethics, and technical excellence in AI.</li>
</ul>
<p>#LI-MK1</p>
<p>#LI-hybrid</p>
<p>P24739_3355024</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000-$247,000 USD</Salaryrange>
      <Skills>Python, AWS, Docker, RESTful APIs, React, TypeScript, Vue, LLM integration, RAG pipelines, prompt engineering, orchestration frameworks, distributed systems, APIs, microservices, container orchestration, cloud platforms, design systems, AI evaluation tooling, real-time application performance monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud identity and access management company that provides security and identity solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7599857</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64989723-d54</externalid>
      <Title>Staff Software Engineer, Platform Streaming (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Streaming Foundations team. As a Staff Software Engineer, you will help set the technical direction for the team and influence the engineering roadmap for the Platform&#39;s streaming capabilities. You will design and lead the implementation of our most complex and critical systems for data-intensive use cases. You will research and champion new technologies and architectural patterns to solve strategic challenges and scale the platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Helping set the technical direction for the team and influencing the engineering roadmap for the Platform&#39;s streaming capabilities</li>
<li>Designing and leading the implementation of our most complex and critical systems for data-intensive use cases</li>
<li>Researching and championing new technologies and architectural patterns to solve strategic challenges and scale the platform</li>
<li>Leading and influencing cross-functional initiatives, ensuring technical alignment and successful execution across multiple teams</li>
<li>Improving the operational posture of our systems by designing for observability, reliability, and scalability, and by mentoring others in operational best practices</li>
<li>Coaching and mentoring senior engineers and acting as a technical leader across the engineering organization</li>
</ul>
<p>You will bring to our teams:</p>
<ul>
<li>5+ years of software development experience in a fast-paced, agile environment</li>
<li>Experience working with Golang or Java is preferred</li>
<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>
<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>
<li>A passion and interest to work on systems that are highly reliable, maintainable, scalable and secure</li>
</ul>
<p>Extra points:</p>
<ul>
<li>Experience with front-end technologies such as TypeScript and React</li>
<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>
<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000-$220,000 CAD</Salaryrange>
      <Skills>Golang, Java, database fundamentals, event streaming technologies, Kafka, scalable systems, secure systems, TypeScript, React, cloud providers, container technologies, Kubernetes, Docker, Identity and Access Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Auth0, part of Okta, is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7630523</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f64d6ed-6a9</externalid>
      <Title>Senior Software Engineer</Title>
<Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our team. As a Senior Software Engineer, you will build, evolve, and operate backend services at scale for ZoomInfo. You&#39;ll work primarily with Node.js/TypeScript (NestJS preferred), design robust REST/GraphQL APIs, optimize MongoDB/Redis, and deploy on cloud platforms (GCP preferred, or AWS) with a strong focus on reliability, performance, security, and cost efficiency.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement, and own microservices and REST/GraphQL APIs in Node.js/TypeScript (NestJS preferred)</li>
<li>Translate product requirements into technical designs; break down work, estimate, and deliver incrementally</li>
<li>Model data and optimize queries in MongoDB; implement effective caching with Redis (TTL, eviction, hot-key mitigation)</li>
<li>Ship production-ready code with unit/integration tests; participate in on-call, incident response, and postmortems</li>
<li>Containerize and deploy via Docker/Kubernetes; automate builds and releases with CI/CD (blue/green or canary)</li>
<li>Instrument services for logs, metrics, and traces (p95/p99); continuously improve latency, reliability, and cost</li>
<li>Review code, document designs, and mentor SE II/III engineers; contribute to shared standards and best practices</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of software engineering experience, including 3+ years building backend services in Node.js/TypeScript</li>
<li>Strong API fundamentals: versioning, pagination, authN/Z (OAuth/OIDC), and secure coding (OWASP)</li>
<li>Hands-on with NestJS/Express/Fastify; familiarity with microservices patterns and event-driven workflows</li>
<li>MongoDB expertise (schema design, indexing, basic sharding concepts) and Redis caching patterns</li>
<li>Cloud experience on GCP (preferred) or AWS; Docker; working knowledge of Kubernetes; CI/CD with GitHub Actions/Jenkins/GitLab</li>
<li>Observability skills: Datadog/OpenTelemetry/Prometheus/Grafana; confident debugging in production</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Kafka or Pub/Sub; API Gateway/Ingress; feature flags; rate limiting and quotas</li>
<li>Terraform/Helm; security tooling (SonarQube), dependency hygiene, secret management</li>
<li>Performance profiling, load testing, and practical cost optimization</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, TypeScript, NestJS, MongoDB, Redis, Docker, Kubernetes, CI/CD, API fundamentals, Microservices, Event-driven workflows, Observability, Kafka, Pub/Sub, API Gateway, Ingress, Feature flags, Rate limiting, Quotas, Terraform, Helm, Security tooling, Dependency hygiene, Secret management, Performance profiling, Load testing, Cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a publicly traded company that provides a go-to-market intelligence platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8305634002</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>682f5f72-49b</externalid>
      <Title>Senior Site Reliability Engineer, Edge - TS/SCI</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>About the Team</strong></p>
<p>At Okta, our motto is &quot;Always On.&quot; Within the Technical Operations (TechOps) team, we live this mission by building the most reliable and performant systems on the planet. We empower organisations to do their most significant work by securely connecting any person, on any device, to the technologies they need.</p>
<p><strong>The Role</strong></p>
<p>We are seeking a Senior Site Reliability Engineer (SRE) to lead the evolution of our large-scale production systems. This role is designed for a technical expert who thrives on solving complex problems at scale and lives by the ethic: &quot;If you have to do it twice, automate it.&quot; Based in the Washington, D.C. area, you will ensure our infrastructure maintains uncompromising reliability and performance while supporting critical national security missions in secure, restricted environments.</p>
<p>Security Requirement: Must be able to obtain and maintain a U.S. security clearance (Secret or Top Secret) to the extent required by U.S. Government contracts.</p>
<p>The selected candidate may be subject to drug testing to the extent required by U.S. Government contracts.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Infrastructure Leadership: Design, build, and oversee Okta’s production infrastructure, ensuring architectural integrity and peak performance.</li>
<li>Incident Engineering: Act as a senior escalation point for production incidents, conducting deep-dive root cause analysis and implementing permanent, automated preventive solutions.</li>
<li>Strategic Automation: Eliminate manual toil by developing sophisticated automation frameworks, evolving monitoring tools, and establishing rigorous technical documentation.</li>
<li>System Resilience: Optimize a highly available, large-scale environment, ensuring &quot;Always On&quot; service delivery across complex network topologies.</li>
<li>Mentorship: Provide technical guidance to the engineering organisation, championing SRE best practices and a culture of self-education.</li>
</ul>
<p><strong>What You’ll Bring</strong></p>
<p><strong>Core Requirements</strong></p>
<ul>
<li>Clearance: Active TS/SCI with Polygraph.</li>
<li>Compliance Expertise: Deep professional experience with FedRAMP and DoD IL6 frameworks.</li>
<li>Education: B.S. in Computer Science or equivalent technical experience.</li>
</ul>
<p><strong>Technical Expertise</strong></p>
<ul>
<li>Networking &amp; Cloud Architecture: Mastery of AWS networking and security, including Transit Gateways, VPCs, Route Tables, ELBs, and NACLs.</li>
<li>Infrastructure as Code (IaC): Advanced experience automating enterprise-scale infrastructure via Terraform or CloudFormation.</li>
<li>Systems &amp; Scripting: Expert-level Linux systems administration with proficiency in Go, Python, Bash, or Ruby.</li>
<li>Production Support: Proven success managing Docker containers and Java-based stacks (Apache/Tomcat) in high-security production environments.</li>
<li>Protocol Knowledge: Solid understanding of networking concepts, IP protocols, and multi-cloud infrastructure.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$159,000-$218,900 USD</Salaryrange>
      <Skills>AWS networking and security, Terraform or CloudFormation, Linux systems administration, Go, Python, Bash, or Ruby, Docker containers and Java-based stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7562925</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e446846a-df1</externalid>
      <Title>Staff Software Engineer, Frontend (Consumer - Advanced Trading)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We are seeking a Staff Software Engineer to lead the front-end technical strategy for our advanced trading platform, Coinbase Advanced. This role is designed for a technical leader who can drive significant performance improvements across our web and mobile clients while mentoring a growing team of engineers.</p>
<p>As a Staff Software Engineer, you will collaborate with Product and Engineering leadership to define and execute a technical roadmap for a reliable, scalable trading platform. You will lead the technical initiative to deliver a low latency interface for experienced traders across all platforms, provide technical guidance and career mentorship to engineers, and anchor complex projects while ensuring high standards for code quality and system design.</p>
<p>We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems.</p>
<p>Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.</p>
<p>While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$218,025-$256,500 USD</Salaryrange>
      <Skills>modern front-end frameworks, mobile environments, complex systems, rendering performance, network efficiency, state management, leadership ability, mentoring senior engineers, organizational improvements, communication skills, Golang, Ruby, Docker, Sinatra, Rails, Postgres, crypto-forward experience, blockchain technology, rapid growth in company, low latency trading system, decomposing large monolith into microservices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service that allows users to buy, sell, and store digital currencies.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7629141</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>678647af-3f7</externalid>
      <Title>Staff Software Engineer (Money)</Title>
<Description><![CDATA[<p>We are seeking a Staff Software Engineer to join our Money team at Databricks India. As one of the first Money engineers at Databricks India, you will lay the groundwork for one of Databricks&#39; most central engineering teams.</p>
<p>Your role is crucial in helping bring diverse business needs together, including abuse prevention, product commercialisation motions, and reliable product availability at scale. You will work closely with infrastructure as well as product teams in bringing critical governance functionality to Databricks customers.</p>
<p>Responsibilities:</p>
<ul>
<li>Own Money systems and services that govern usage of all Databricks products and offerings.</li>
<li>Enhance engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>
<li>Collaborate with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>
<li>Provide leadership in long-term vision and requirements development for Databricks products, in partnership with our engineering teams.</li>
<li>Represent Databricks at academic and industrial conferences &amp; events.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>12+ years of production-level experience in Java, Scala, C++, or a similar language.</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>
<li>Proven track record in architecting, developing, deploying, and operating large scale distributed systems.</li>
<li>Experience with software security and systems that handle sensitive data.</li>
<li>Demonstrated leadership skills and the ability to lead across functional and organizational boundaries.</li>
<li>A proactive approach and a passion for delivering high-quality solutions.</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, Cloud technologies, Software security, Distributed systems, Leadership skills, AWS, Azure, GCP, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7654349002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>81e928a2-c9f</externalid>
      <Title>Senior Site Reliability Engineer (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>We are looking for a Senior Site Reliability Engineer to join our SRE team based in Europe. As a Senior Site Reliability Engineer, you&#39;ll ensure our production systems are not only operational but also resilient, scalable, and ready for exponential growth.</p>
<p>This isn&#39;t just about keeping the lights on; it&#39;s about directly contributing to the platform&#39;s core resiliency and robustness. You&#39;ll be a hands-on builder, crafting solutions that make our system more reliable by design.</p>
<p>Responsibilities</p>
<ul>
<li>Design and build custom software in Go to enhance the platform&#39;s reliability, resiliency, and redundancy.</li>
<li>Partner with engineering teams to embed reliability principles, improving the availability, performance, and observability of our services.</li>
<li>Use your deep understanding of infrastructure and observability principles to identify opportunities for improvement within the product and implement solutions.</li>
<li>Contribute to our on-call rotation, providing rapid, effective response to critical incidents and using your expertise to troubleshoot, mitigate or accurately escalate production issues.</li>
<li>Develop and refine our SRE tooling and processes, focusing on automation and operational efficiency.</li>
<li>Define, document, and champion reliability best practices across the organisation.</li>
</ul>
<p>What you&#39;ll need to be successful</p>
<p>This role requires a unique blend of a software engineer&#39;s mindset and operational expertise. You&#39;ll thrive in this role if you have:</p>
<ul>
<li>A proactive and systematic approach to problem-solving, with a high degree of ownership.</li>
<li>Proven experience in a production environment supporting large-scale, mission-critical applications with a high degree of autonomy.</li>
<li>Proficiency in at least one programming language, with a preference for Go. You should be comfortable writing custom applications, not just scripts.</li>
<li>Experience with infrastructure as code (Terraform), container orchestration (Kubernetes, Docker) and GitOps (ArgoCD).</li>
<li>Demonstrable expertise in a major cloud provider (Azure, AWS, or GCP).</li>
<li>A strong grasp of microservices architecture, databases (SQL, NoSQL), and networking fundamentals, so you can understand how custom code can solve platform-level issues.</li>
<li>An understanding of core SRE principles, including SLIs, SLOs, and error budgets.</li>
<li>Experience in an on-call rotation for a 24/7 cloud-based environment.</li>
<li>Exceptional communication and collaboration skills, with a proven ability to work effectively in a remote, distributed team, where tasks may be self-driven.</li>
</ul>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Terraform, Kubernetes, Docker, GitOps, Cloud provider (Azure, AWS, or GCP), Microservices architecture, Databases (SQL, NoSQL), Networking fundamentals, Core SRE principles (SLIs, SLOs, error budgets)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides authentication and identity services for hundreds of millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7418982</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e69d3fc1-eae</externalid>
      <Title>Senior Software Engineer - Node.js</Title>
<Description><![CDATA[<p>Join ZoomInfo as a Senior Software Engineer - Node.js and accelerate your career. Our team moves fast, thinks boldly, and empowers you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen, fast.</p>
<p>As a Senior Software Engineer, you will get to explore and work with cutting-edge technologies and a large and rich data set. If you like working on tough problems, whether that&#39;s building systems that handle millions of customer requests a day or how to make sense of over a billion pieces of potentially correlated data, ZoomInfo is the right place for you.</p>
<p>The ideal candidate is a seasoned engineer with a deep understanding of modern server-side technologies and distributed systems. They possess strong skills in building modular, maintainable, and scalable backend services with an emphasis on performance, reliability, and security. The candidate should have a keen eye for detail, a passion for building robust systems, and the ability to collaborate effectively within cross-functional teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, develop, and maintain high-performance backend services capable of handling millions of requests daily.</li>
<li>Collaborate with other team members and stakeholders to contribute to the design and evolution of our applications, ensuring scalability, reliability, and performance.</li>
<li>Work with TypeScript, NestJS, and Node.js to build and optimize backend applications.</li>
<li>Work with RESTful APIs, GraphQL, and integrate with external services, ensuring data consistency, robustness, and security.</li>
<li>Manage and optimize data storage solutions using MongoDB, Redis, ensuring efficient and reliable data access.</li>
<li>Integrate with Confluent Cloud to manage data streaming and real-time processing pipelines.</li>
<li>Conduct thorough code reviews to maintain high-quality standards across the codebase.</li>
<li>Collaborate with other engineers to solve complex and intriguing problems.</li>
<li>Stay up-to-date with the latest backend technologies and industry trends.</li>
<li>Contribute to the continuous improvement of our technology stack and development processes.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years of industry experience with a B.S. in Computer Science or equivalent.</li>
<li>Strong experience in backend development with TypeScript, NestJS, Node.js, and Java.</li>
<li>5+ years of experience with JavaScript/TypeScript and Node.js.</li>
<li>Proficiency in working with MongoDB and managing large-scale databases.</li>
<li>Experience with Confluent Cloud or similar data streaming platforms is a plus.</li>
<li>Familiarity with CI/CD tools for automating builds, testing, and deployments (e.g., Jenkins).</li>
<li>Proficiency in working with RESTful APIs and GraphQL.</li>
<li>Must be able to work independently and deliver excellent results in short timelines.</li>
<li>Technically lead and mentor juniors in the team, and drive planning and execution of work.</li>
<li>Experience with containerization and orchestration tools (Docker, Kubernetes).</li>
<li>Strong problem-solving and debugging skills with experience in high-traffic applications.</li>
<li>Experience with backend technologies (Node.js, Python, or Java) and microservices architecture.</li>
<li>Excellent communication and collaboration skills.</li>
<li>Ability to thrive in a dynamic, fast-paced environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TypeScript, NestJS, Node.js, Java, JavaScript, MongoDB, Redis, Confluent Cloud, CI/CD, RESTful APIs, GraphQL, Containerization, Orchestration, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a NASDAQ-listed company that provides a Go-To-Market Intelligence Platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8226022002</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>50f401de-7b1</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>Who we are</p>
<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>As we continue to revolutionize how the world interacts, we&#39;re acquiring new skills and experiences that make work feel truly rewarding.</p>
<p>Your career at Twilio is in your hands.</p>
<p>We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!</p>
<p>Join the team as Twilio&#39;s next Staff Software Engineer</p>
<p>About the job</p>
<p>This position is needed to harden, optimize, and scale the real-time event-aggregation services that power our Observability Insights/Analytics platform. We are seeking a Staff Software Engineer with deep Java expertise to own high-throughput stream-processing microservices (Kafka Streams / Flink) deployed on AWS EKS, tune ClickHouse for millisecond-latency writes, and embed observability that keeps incident minutes near zero. You will design resilient, high-performance systems capable of processing &gt;250K events/sec with p99 latencies under 200ms, while championing DevSecOps practices and mentoring junior engineers.</p>
<p>Responsibilities</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Design, build, and maintain high-performance Java microservices using Spring Boot, capable of ingesting &gt;250K events/sec with p99 latencies under 200ms</li>
</ul>
<p>Qualifications</p>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>8+ years of professional Java development experience with mastery of high-performance and low-latency design patterns</li>
<li>Production experience with Kafka Streams, Flink, or comparable stream-processing frameworks for building real-time data pipelines</li>
<li>Hands-on ClickHouse (or columnar database) performance tuning and SQL optimization expertise</li>
<li>Proven success operating AWS-hosted microservices at scale with solid Linux, Docker, and Kubernetes knowledge</li>
<li>Strong observability mindset including metrics, tracing, alerting, and post-incident analysis capabilities</li>
<li>Excellent communication skills and a bias toward collaborative problem-solving in cross-functional team environments</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience migrating single-region services to multi-region active-active topologies for high availability</li>
<li>Familiarity with data-privacy controls including PII tokenization and field-level encryption</li>
<li>Previous work in telecom, real-time analytics, or compliance-sensitive domains</li>
<li>Contributions to open-source Java or streaming projects demonstrating community engagement</li>
</ul>
<p>What We Offer</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more.</p>
<p>Offerings vary by location.</p>
<p>Twilio thinks big. Do you?</p>
<p>We like to solve problems, take initiative, pitch in when needed, and we&#39;re always up for trying new things.</p>
<p>That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic.</p>
<p>Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>
<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>
<p>Twilio is proud to be an equal opportunity employer.</p>
<p>We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.</p>
<p>Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.</p>
<p>Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kafka Streams, Flink, ClickHouse, AWS EKS, Spring Boot, Linux, Docker, Kubernetes, DevSecOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7234666</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>