<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>272750a8-710</externalid>
      <Title>Consultant</Title>
      <Description><![CDATA[<p>As a Consultant at MHP, you will operate infrastructure in AWS using Terraform, create technical concepts for new features and enhancements within a Scrum Team, develop and maintain scalable Java Spring Boot microservices, and work with AWS and Kubernetes.</p>
<p>You will have expertise in backend programming using Java and Spring Boot, experience with AWS, including services like S3, EC2, and Lambda, and experience with Terraform for creating and managing AWS infrastructure.</p>
<p>You will also have experience with tools such as IntelliJ and REST tools (Postman or similar), proficiency in working with Kubernetes for microservices, advanced-level AWS certification, experience with Apache Kafka event streaming, experience working with MongoDB database, and experience working with GitLab CI/CD pipelines.</p>
<p>Your start date is by arrangement. You will work full-time (40h) with 27 vacation days and have a permanent (unlimited) employment contract. You will need a valid work permit and fluency in written and spoken English.</p>
<p>At MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will be part of a strong team spirit, where every win, big or small, belongs to everyone. MHP welcomes curiosity, creativity, and unconventional thinking, and recognizes the importance of healthy, tight-knit communities and sustainable environmental change.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, AWS, Terraform, Kubernetes, IntelliJ, REST tools, Apache Kafka, MongoDB, GitLab CI/CD pipelines</Skills>
      <Category>IT</Category>
      <Industry>Consulting</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. It serves over 300 customers worldwide.</Employerdescription>
      <Employerwebsite>http://www.mhp.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18226</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>1125d83c-1eb</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>
<p>This involves writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>You will be part of a team that builds highly technical products that fulfil real, important needs in the world. We constantly push the boundaries of data and AI technology, while simultaneously operating with the resilience, security and scale that is critical to making customers successful on our platform.</p>
<p>Our engineering teams build one of the largest-scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>
<p>We run thousands of Kubernetes clusters across all regions and orchestrate millions of VMs on a daily basis.</p>
<p>Competencies:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>10+ years of production level experience in one of: Java, Scala, C++, or similar language</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Experience in architecting, developing, deploying, and operating large scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Good knowledge of SQL</li>
<li>Experience with software security and systems that handle sensitive data</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,400-$247,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks enables data teams to solve the world&apos;s toughest problems by building and running the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6779233002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f38b4fcf-88f</externalid>
      <Title>Staff Software Engineer, Organization</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Organizations team. As a Staff Software Engineer, you will help drive architectural vision and strategy on the team to design and deliver powerful new enterprise functionality for our SaaS customers. You will identify and implement strategic technical improvements to our codebase and architecture, orchestrate and lead major technical projects, and mentor and coach less experienced engineers on sound engineering practices and technical leadership.</p>
<p>You will work closely with the Product Manager and Product Designer to define the look, feel, and functionality of new features and review customer feedback. You will also serve as a subject matter expert on building scalable, reliable, and maintainable distributed systems.</p>
<p>To be successful in this role, you will need to have solid architectural and security knowledge, backed by experience in designing, implementing, and evolving complex distributed systems. You will also need to have worked on projects that required close collaboration with external teams and have experience making those a success.</p>
<p>You will be a good mentor and communicator, able to explain complex concepts simply in person or in a document. You will know that while an engineer can write code, teams collaborate to ship successful products.</p>
<p>You will have solid previous experience with Node.js (JavaScript or TypeScript) building scalable backend services and creating and maintaining public and internal APIs. You will also have built frontend and full-stack apps and know which approach to use when.</p>
<p>You will have a good understanding of SQL databases and know how to debug and optimize table and query structure for performance under load. You will also have experience with Docker and cloud environments (AWS and Azure preferred).</p>
<p>Bonus points for experience with Kubernetes, knowledge of authentication protocols such as OAuth2, OIDC, SAML, understanding of event-driven architectures, especially Apache Kafka, understanding and experience of DevOps culture, and knowledge of security engineering and application security.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>€74.000-€102.000 EUR</Salaryrange>
      <Skills>Node.js, JavaScript, TypeScript, SQL databases, Docker, cloud environments, AWS, Azure, Kubernetes, authentication protocols, OAuth2, OIDC, SAML, event-driven architectures, Apache Kafka, DevOps culture, security engineering, application security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7560775</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3922bc3d-027</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>
<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product. This involves, among other things, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>Some example teams you can join include:</p>
<ul>
<li>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems.</li>
<li>Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Enterprise Platform: Offer a simple and powerful experience for onboarding and managing customers&#39; data teams across tens of thousands of users on the Databricks platform.</li>
<li>Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services.</li>
<li>Service Platform: Build high-quality services and manage the services in all environments in a unified way.</li>
<li>Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>10+ years of production-level experience in one of: Java, Scala, C++, or similar language</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Good knowledge of SQL</li>
<li>Experience with software security and systems that handle sensitive data</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks enables data teams to solve the world&apos;s toughest problems by building and running the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6544443002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>93c1356c-a95</externalid>
      <Title>Principal Software Engineer, Web Data - Tech Lead</Title>
      <Description><![CDATA[<p>We&#39;re looking for an exceptional Principal Software Engineer to serve as the de facto Technical Lead for our Web Data Acquisition (WDA) team. This is a highly visible, hands-on technical leadership role where you&#39;ll own the architectural direction for crawling systems, evolve and unify crawling platforms into a best-in-class stack, and elevate a high-performing engineering team.</p>
<p>As a Principal Software Engineer, you&#39;ll solve complex distributed systems challenges, build modular tooling that accelerates delivery, and set the standard for observability and operational excellence. You&#39;ll have a dedicated manager handling all HR and administrative responsibilities. A product manager connects business needs with technical work. Your focus is 100% technical leadership, mentorship, and hands-on execution.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Technical Leadership &amp; System Design: Proven experience building web crawling or large-scale data systems from scratch. Strong architectural skills designing scalable, fault-tolerant distributed systems. Track record leading complex technical initiatives and driving architecture direction for teams.</li>
<li>Data Engineering Expertise: Deep background in large-scale data engineering (terabytes daily). Hands-on experience with cloud data warehouses (BigQuery, Snowflake). Experience with Apache Kafka, Kubernetes (GKE/EKS), and orchestration tools (Airflow).</li>
<li>Web Crawling &amp; Data Extraction: Deep expertise in web crawling technologies and advanced scraping (Scrapy or similar). Experience extracting structured/unstructured web data and SERP extraction. Knowledge of proxy infrastructure management, anti-bot detection, and ethical crawling.</li>
<li>Leadership &amp; Team Development: Experience mentoring engineers at all levels and fostering collaborative culture. Strong ability to influence technical direction and establish best practices. Track record hiring, coaching, and developing senior engineers.</li>
</ul>
<p>Ideal Candidate Profile:</p>
<ul>
<li>10+ years software engineering experience. 5+ years focused on data engineering. 3+ years in senior/principal-level technical leadership.</li>
<li>Strong CS fundamentals (algorithms, data structures, distributed systems). Self-starter who thrives in fast-paced environments.</li>
</ul>
<p>Core Technical Stack:</p>
<ul>
<li>Python &amp; Java</li>
<li>Apache Kafka</li>
<li>GCP (BigQuery, GKE, Vertex AI)</li>
<li>Snowflake &amp; Starburst/Trino</li>
<li>Terraform</li>
<li>Scrapy / Web Scraping Frameworks</li>
<li>Proxy Management Systems</li>
<li>Distributed Systems &amp; Kubernetes</li>
<li>Apache Airflow</li>
<li>Large-Scale ETL Pipelines</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Python, Java, Apache Kafka, Kubernetes, GCP, Snowflake, Terraform, Scrapy, Proxy Management Systems, Distributed Systems, Apache Airflow, Large-Scale ETL Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8378092002</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9238107d-204</externalid>
      <Title>Software Architect, Reliability Engineering</Title>
      <Description><![CDATA[<p>Join the team as Twilio&#39;s next Reliability Architect.</p>
<p>As an Architect in SRE, you will drive the technical strategy, vision and outcomes for Twilio&#39;s Reliability Engineering organisation. You will define and lead solutions and initiatives that ensure Twilio products are reliable worldwide, and you will define standards and guide engineering teams on best practices for designing, building, and operating resilient systems.</p>
<p>This role is pivotal to Twilio&#39;s commitment to operational excellence, scalability, and pragmatic, large-scale systems design in the cloud.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with senior technical leaders across Twilio to set and communicate the reliability strategy, translating business goals into measurable outcomes.</li>
<li>Influence company-wide architectural decisions, balancing long-term vision with near-term delivery and compliance needs, and focusing on availability, performance, resilience, and cost efficiency using Kubernetes, AWS, Terraform, and modern observability tooling.</li>
<li>Lead the design, implementation, and operation of scalable solutions and paved roads that enable reliable, high-traffic services.</li>
<li>Ensure integrity and quality across the service lifecycle; design fault-tolerant architectures, incident response, disaster recovery, and capacity/cost management.</li>
<li>Collaborate with product and cross-functional teams to identify reliability risks and convert them into actionable designs, programs, and tooling.</li>
<li>Establish and champion reliability practices and drive systemic improvements.</li>
<li>Mentor and grow engineers and technical leaders.</li>
<li>Track and apply emerging SRE, cloud, and large-scale systems best practices; introduce pragmatic innovations that improve reliability at scale.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>15+ years of experience in Reliability Engineering, Software Engineering, or DevOps roles with a focus on infrastructure, backend systems, and reliability, including time as a principal/architect.</li>
<li>Strong experience in driving strategic technical decisions and defining long-term technical vision.</li>
<li>In-depth understanding of the role of Reliability Engineering in a large and diverse SaaS organisation.</li>
<li>Experience driving cross-org technical architecture outcomes.</li>
<li>Knowledge of cloud architecture, DevOps practices, and large-scale systems design with microservices.</li>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field (or equivalent experience).</li>
<li>Strong production experience, including operational management, scaling, partitioning strategies, and tuning for performance and reliability in high-scale environments.</li>
<li>Hands-on experience with Kubernetes (e.g., EKS), deploying and managing stateful services, and cloud services like AWS.</li>
<li>Proficiency in infrastructure-as-code tools such as Terraform or CloudFormation for automating infrastructure.</li>
<li>Expertise in observability tools (e.g., Prometheus, Grafana, Datadog) for monitoring distributed systems and setting up alerting.</li>
<li>Proficient in at least one programming language (e.g., Go, Python, Java) for building automation and tooling.</li>
<li>Experience designing incident response processes, SLOs/SLIs, runbooks, and participating in on-call rotations.</li>
<li>Experience running cross-functional post-incident reviews and driving improvements.</li>
<li>Strong understanding of distributed systems principles, including consensus, durability, throughput, and availability tradeoffs.</li>
<li>Proven track record of leading reliability improvements in data-intensive or mission-critical systems and collaborating with engineering teams.</li>
<li>Excellent problem-solving, analytical, verbal, and written communication skills, with the ability to work in cross-functional and distributed environments.</li>
<li>Demonstrated leadership in mentoring teams, influencing decisions, and balancing long-term objectives with short-term needs.</li>
<li>Ability to influence and build effective working relationships with all levels of the organisation.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Specific experience owning and operating large AWS footprints.</li>
<li>Knowledge of Kubernetes architecture and concepts.</li>
<li>Experience with data technologies like Apache Kafka, AWS MSK, or similar for reliable streaming.</li>
<li>Passion for building reliable products, with prior projects in high-availability systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$227,840.00 - $284,800.00 per year</Salaryrange>
      <Skills>Reliability Engineering, Software Engineering, DevOps, Cloud Architecture, Microservices, Kubernetes, AWS, Terraform, Observability Tools, Programming Languages, Incident Response, Distributed Systems Principles, Apache Kafka, AWS MSK, Kubernetes Architecture, Data Technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a communications platform that provides cloud communication APIs for building, scaling, and operating real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7658259</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1044456b-79a</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>We are obsessed with enabling data teams to solve the world&#39;s toughest problems. As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>
<p>This involves, among other things, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>You will be part of one of the following teams:</p>
<ul>
<li>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems.</li>
<li>Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Enterprise Platform: Offer a simple and powerful experience for onboarding and managing customers&#39; data teams across tens of thousands of users on the Databricks platform.</li>
<li>Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services.</li>
<li>Service Platform: Build high-quality services and manage the services in all environments in a unified way.</li>
<li>Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,400-$247,000 USD</Salaryrange>
      <Skills>Scala, Java, Apache Spark, Apache Kafka, Cloud APIs (AWS, Azure, CloudFormation, Terraform), SQL, Software security, Cloud technologies (AWS, Azure, GCP, Docker, Kubernetes)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a global organisation that builds and runs the world&apos;s best data and AI infrastructure platform. It was founded in 2013 by the original creators of Apache Spark.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6779232002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21860f67-527</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>
<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritize, design, implement, test, and operate micro-services for the Databricks platform and product. This involves, among other things, writing software in Scala/Java, building data pipelines (Apache Spark™, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>
<p>Some example teams you can join:</p>
<ul>
<li>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems.</li>
<li>Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way.</li>
<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Enterprise Platform: Offer a simple and powerful experience for onboarding and managing customers&#39; data teams across tens of thousands of users on the Databricks platform.</li>
<li>Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services.</li>
<li>Service Platform: Build high-quality services and manage the services in all environments in a unified way.</li>
<li>Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</li>
</ul>
<p>Competencies:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>10+ years of production-level experience in one of: Java, Scala, C++, or similar language</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>
<li>Good knowledge of SQL</li>
<li>Experience with software security and systems that handle sensitive data</li>
<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>
</ul>
<p>Pay Range Transparency: The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,000-$260,000 USD</Salaryrange>
      <Skills>Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks enables data teams to solve the world&apos;s toughest problems by building and running the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5408888002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>672557eb-bee</externalid>
      <Title>Engineering Manager, Data Platform</Title>
      <Description><![CDATA[<p><strong>Engineering Manager, Data Platform</strong></p>
<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead, mentor, and grow a team of senior and principal engineers</li>
<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>
<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>
<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>
<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>
<li>Ensure high standards in system architecture, code quality, and operational excellence</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>
<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>
<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>
<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>
<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>
<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>
<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>
<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>
<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>
<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Epic Games offers a comprehensive benefits package, including:</p>
<ul>
<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>
<li>Long-term disability and life insurance</li>
<li>401k with competitive match</li>
<li>Unlimited PTO and sick time</li>
<li>Paid sabbatical after 7 years of employment</li>
<li>Robust mental well-being program through Modern Health</li>
<li>Company-wide paid breaks and events throughout the year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Epic Games</Employername>
      <Employerlogo>https://logos.yubhub.co/epicgames.com.png</Employerlogo>
      <Employerdescription>Epic Games is a leading game development company that creates award-winning games and engine technology.</Employerdescription>
      <Employerwebsite>https://www.epicgames.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.epicgames.com/en-US/careers/jobs/5818031004</Applyto>
      <Location>Cary</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>f4a0deb2-3f9</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Senior Software Engineer at its Mountain View office. The role sits at the heart of strategic decision-making, turning programmatic advertising data into actionable insights for data infrastructure that operates at global scale. You&#39;ll work directly with leadership throughout the software development lifecycle.</p>
<p><strong>About the Role</strong></p>
<p>The Budget Optimization Engineering team at Microsoft builds the real-time data infrastructure that powers programmatic advertising at global scale. The team owns Java-based microservices handling budget distribution, campaign discovery and ranking, bid-price optimization, Kafka-based streaming pipelines, and job orchestration — across datacenters. We are looking for a Senior Software Engineer to join this team and drive a significant Azure migration: moving services from legacy infrastructure (Concourse CI, internal Artifactory, Maestro) to modern Azure tooling (Azure DevOps, ACR, AKS).</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using Java, Python (and C# where applicable).</li>
<li>Maintain and improve production services across the optimization platform — including Kafka streaming pipelines, budget controllers, job orchestration (job-broker), and deal monitoring — with a focus on reliability and strict SLA adherence.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, Go, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Apache Kafka — solid understanding of consumers, producers, offset management, partition strategies, performance tuning, and cross-datacenter replication patterns.</li>
<li>Kubernetes — production experience writing and deploying Helm charts; hands-on with Deployments, StatefulSets, Services, ConfigMaps, Secrets, Jobs, and HPAs; comfortable with multi-cluster and multi-datacenter environments.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong problem-solving skills and ability to work independently.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, C#, Go, Apache Kafka, Kubernetes, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-84/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>eaab676d-0d0</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft is looking for a talented Senior Software Engineer at its Redmond office. The role sits at the heart of strategic decision-making, turning market data into actionable insights. You&#39;ll work directly with leadership to shape the company&#39;s direction in the technology market.</p>
<p><strong>About the Role</strong></p>
<p>The Budget Optimization Engineering team at Microsoft builds the real-time data infrastructure that powers programmatic advertising at global scale. The team owns Java-based microservices handling budget distribution, campaign discovery and ranking, bid-price optimization, Kafka-based streaming pipelines, and job orchestration — across datacenters. We are looking for a Senior Software Engineer to join this team and drive a significant Azure migration: moving services from legacy infrastructure (Concourse CI, internal Artifactory, Maestro) to modern Azure tooling (Azure DevOps, ACR, AKS).</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using Java, Python (and C# where applicable).</li>
<li>Maintain and improve production services across the optimization platform — including Kafka streaming pipelines, budget controllers, job orchestration (job-broker), and deal monitoring — with a focus on reliability and strict SLA adherence.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, Go, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Apache Kafka — solid understanding of consumers, producers, offset management, partition strategies, performance tuning, and cross-datacenter replication patterns.</li>
<li>Kubernetes — production experience writing and deploying Helm charts; hands-on with Deployments, StatefulSets, Services, ConfigMaps, Secrets, Jobs, and HPAs; comfortable with multi-cluster and multi-datacenter environments.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong problem-solving skills and ability to work independently.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, C#, Go, Apache Kafka, Kubernetes, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-83/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>901a6402-db5</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong Python and SQL</li>
<li>Hands-on experience with Redshift, Airflow, DBT</li>
<li>Mandatory hands-on experience with Apache Spark (batch and/or structured streaming)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594</Applyto>
      <Location>Chengdu</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>