{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/apache-kafka"},"x-facet":{"type":"skill","slug":"apache-kafka","display":"Apache Kafka","count":14},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b8870690-5d6"},"title":"Sr. AI Engineer - Player Intelligence and Growth, Data & Insights (D&I)","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. The Data &amp; Insights (D&amp;I) team transforms data into actionable insights that power EA. We are hiring an AI Engineer to join the Player Intelligence &amp; Growth team within Data and Insights (D&amp;I), reporting to a Sr Manager. This team partners with all of EA&#39;s game studios to offer data science &amp; AI products and solutions. For this AI Engineer role we are looking for applied and practical AI/ML expertise with a focus on Gen AI Solutions.</p>\n<p>As a Sr. AI Engineer, you will help scale our internal AI-powered insights tool by partnering with analysts, product teams, marketing, and titles like EA SPORTS FC™, Apex Legends™, The Sims™, and Madden NFL. You will work directly with game teams/partners (internal clients) to understand their offerings/domain and create AI products and solutions to solve for their use cases. 
You will develop plans to generalize AI products across titles and review AI tools used within the team, providing guidance and being accountable for the success and the adoption of the project/product.</p>\n<p>You will implement feature enhancements for our AI-powered analytics tool using GCP services, LLMs, and our internal tech stack. You will engage with other Data Scientists and Data Analysts, sharing best practices and consulting on cross-team projects. You will design, improve, and operate our data pipeline, which transfers and processes petabytes of data using tools such as AWS, S3, Kubernetes, GCP, Python, Apache Kafka, Ruby &amp; Hive.</p>\n<p>We are looking for a hands-on engineer with practical experience building AI/ML-driven systems, evaluating emerging tools, and delivering impactful, reusable solutions across multiple domains. You will have a graduate degree in Computer Science, Engineering, AI/ML, or a related quantitative field and 4+ years of experience building AI, ML, or data-driven systems in production environments.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b8870690-5d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Sr-AI-Engineer-Player-Intelligence-and-Growth-Data-Insights-D-I/211264","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,300 - $170,700 CAD","x-skills-required":["Python","SQL","GCP","LLMs","embeddings","retrieval systems","AI agents","CI/CD","microservices","cloud-native deployment patterns"],"x-skills-preferred":["AWS","S3","Kubernetes","Apache Kafka","Ruby","
Hive"],"datePosted":"2026-04-24T13:16:11.540Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, GCP, LLMs, embeddings, retrieval systems, AI agents, CI/CD, microservices, cloud-native deployment patterns, AWS, S3, Kubernetes, Apache Kafka, Ruby, Hive","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":122300,"maxValue":170700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88030e1d-d2f"},"title":"Senior Software Engineer","description":"<p>As a Senior Software Engineer at MHP, you will develop full-stack applications using React and TypeScript on the frontend and Node.js (TypeScript) on the backend. You will also define, deploy, and manage infrastructure using AWS CDK (TypeScript) and design and maintain microservices and event-driven systems using Apache Kafka, SNS, SQS, and EventBridge.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing full-stack applications using React and TypeScript on the frontend and Node.js (TypeScript) on the backend</li>\n<li>Defining, deploying, and managing infrastructure using AWS CDK (TypeScript)</li>\n<li>Designing and maintaining microservices and event-driven systems using Apache Kafka, SNS, SQS, and EventBridge</li>\n<li>Ensuring system security, scalability, and observability using tools like IAM, CloudWatch, and X-Ray</li>\n<li>Writing clean, maintainable, and well-documented code</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Senior-level experience working with NodeJS; additional Java experience is an advantage</li>\n<li>Senior-level experience working with frontend technologies such as React and TypeScript</li>\n<li>Mid-senior level experience working with AWS Services (S3, API Gateway, 
Lambda, ECS), Authorization with PPN/Entra-ID (OAuth, OIDC), and Infrastructure as Code (AWS CDK with TypeScript)</li>\n<li>Experience with REST API development</li>\n<li>Hands-on knowledge of responsive UI development and frontend testing</li>\n<li>Hands-on knowledge of CI/CD pipelines with GitLab and test automation</li>\n<li>Problem-solving mindset with the ability to optimize performance and cost management</li>\n<li>Strong communication skills and experience working in cross-functional Agile teams</li>\n<li>Ability to write clean, maintainable, and well-documented code</li>\n<li>Experience in enterprise applications, preferably in the Automotive domain, is a plus</li>\n<li>Bachelor&#39;s Degree in Computer Science or a related field is an advantage</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88030e1d-d2f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18149","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["NodeJS","React","TypeScript","AWS CDK","Apache Kafka","SNS","SQS","EventBridge","IAM","CloudWatch","X-Ray"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:26.208Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Consulting","skills":"NodeJS, React, TypeScript, AWS CDK, Apache Kafka, SNS, SQS, EventBridge, IAM, CloudWatch, X-Ray"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aad66c6a-ad1"},"title":"Lead Data Scientist - Battlefield, Data and Insights (D&I)","description":"<p>We&#39;re hiring a Lead Data Scientist to join our Data &amp; Insights (D&amp;I) Data 
Science team. The Data Science team partners with EA studios to build scalable AI/ML solutions that enhance player experience, game design, and live service performance.</p>\n<p>You will bring expertise in AI, ML, and engineering. You will also lead efforts related to life cycle management, progression, in-game economies, and player experience, specifically within the Battlefield franchise.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working directly with Battlefield game team/partners to understand their offerings/domain and create data science products and solutions to solve for their use cases.</li>\n<li>Applying problem-driven AI/ML approaches to improve player experience, engagement, retention, and monetization systems.</li>\n<li>Developing plans to generalize products across the franchise with our engineering partners.</li>\n<li>Establishing rigorous experimental design standards (A/B testing, causal inference, system experimentation) to produce actionable insights.</li>\n<li>Collaborating with engineering partners to productionize models within live environments and gameplay systems.</li>\n<li>Designing and enhancing data pipelines that process petabyte-scale telemetry data using technologies such as AWS, S3, Kubernetes, GCP, Python, Apache Kafka, and Hive.</li>\n<li>Developing algorithms and statistical models for forecasting, player state prediction, churn analysis, progression balancing, and economic system tuning.</li>\n<li>Communicating complex analytical concepts to technical and non-technical partners, influencing strategic decisions.</li>\n<li>Mentoring other data scientists and contributing to shared best practices across the D&amp;I organization.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aad66c6a-ad1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Lead-Data-Scientist-Battlefield-Data-and-Insights-D-I/213127","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,400 - $204,400 CAD","x-skills-required":["AI","ML","engineering","data science","AWS","S3","Kubernetes","GCP","Python","Apache Kafka","Hive"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:13:26.748Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, ML, engineering, data science, AWS, S3, Kubernetes, GCP, Python, Apache Kafka, Hive","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":141400,"maxValue":204400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7e078ceb-e9a"},"title":"Data Engineer","description":"<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have an exciting opportunity for you to join our expanding area of Prognostics.</p>\n<p>Are you enthusiastic to mine raw data and realize its hidden value by building amazing, connected data solutions that benefit our customers? Would you love to accelerate our efforts in implementing advanced physics and ML Models in production?</p>\n<p>The Data Engineer role resides within Ford’s Electric Vehicle organization. 
In this role, you will work on building scalable and robust data pipelines to process large volumes of connected vehicle data to support the Ford vehicle prognostic initiatives.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.</li>\n<li>Build data pipelines to monitor data quality and the performance of analytical models.</li>\n<li>Maintain the infrastructure of the data platform using Terraform and continuously develop, evaluate, and deliver code using CI/CD.</li>\n<li>Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.</li>\n<li>Implement an enterprise data governance model and actively promote the concepts of data protection, sharing, reuse, quality, and standards.</li>\n<li>Enhance and maintain the DevOps capabilities of the data platform.</li>\n<li>Continuously optimize and enhance existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.</li>\n<li>Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).</li>\n<li>Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.</li>\n<li>Perform any necessary data mapping and data lineage activities, and document information flows.</li>\n<li>Monitor the production pipelines and provide production support by addressing production issues as per SLAs.</li>\n<li>Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.</li>\n<li>Provide visibility into data quality/vehicle/feature issues and work with the business owners to fix the issues.</li>\n<li>Demonstrate technical knowledge and 
communication skills with the ability to advocate for well-designed solutions.</li>\n<li>Continuously enhance your domain knowledge of connected vehicle data, connected services and algorithms/models developed by data scientists within Ford.</li>\n<li>Stay current on the latest data engineering practices and contribute to the technical direction of the company while keeping a customer-centric approach.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Master’s degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor&#39;s Degree).</li>\n<li>4 years of professional experience in:\n<ul>\n<li>Data engineering, data product development, and software product launches</li>\n<li>At least three of the following languages: Java, Python, Spark, Scala, SQL</li>\n</ul>\n</li>\n<li>3 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:\n<ul>\n<li>Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, and Google BigQuery.</li>\n<li>Workflow orchestration tools like Airflow.</li>\n<li>Relational database management systems like MySQL, PostgreSQL, and SQL Server.</li>\n<li>Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.</li>\n<li>Microservices architecture to deliver large-scale real-time data processing applications.</li>\n<li>REST APIs for compute, storage, operations, and security.</li>\n<li>DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, and Docker.</li>\n<li>Project management tools like Atlassian JIRA.</li>\n</ul>\n</li>\n</ul>\n<p><strong>Even better if you have...</strong></p>\n<ul>\n<li>Ph.D. 
or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field.</li>\n<li>2 years of experience with ML Model Development and/or MLOps.</li>\n<li>Contributed code to improve open-source data/software engineering projects</li>\n<li>Experience architecting cloud infrastructure and handling application migrations/upgrades.</li>\n<li>GCP Professional Certifications.</li>\n<li>Demonstrated passion to mine raw data and realize its hidden value.</li>\n<li>Passion to experiment with and implement state-of-the-art data engineering methods/techniques.</li>\n<li>Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.</li>\n<li>Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.</li>\n<li>Analytics skills to profile data and troubleshoot data pipeline/product issues.</li>\n<li>Ability to simplify and clearly communicate complex data/software ideas and problems, and to work with cross-functional teams and all levels of management independently.</li>\n</ul>\n<p>Experience Level: mid</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7e078ceb-e9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://www.ford.com/","logo":"https://logos.yubhub.co/ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/55567","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"This position is a range of salary grades 6-8.","x-skills-required":["Java","Python","Spark","Scala","SQL","Amazon Redshift","Microsoft Azure Synapse Analytics","Google BigQuery","Airflow","MySQL","PostgreSQL","SQL Server","Apache Kafka","GCP 
Pub/Sub","Microservices","REST APIs","Tekton","GitHub Actions","Git","GitHub","Terraform","Docker","Atlassian JIRA"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:24:19.099Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Java, Python, Spark, Scala, SQL, Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery, Airflow, MySQL, PostgreSQL, SQL Server, Apache Kafka, GCP Pub/Sub, Microservices, REST APIs, Tekton, GitHub Actions, Git, GitHub, Terraform, Docker, Atlassian JIRA"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_653bca90-18d"},"title":"Engineering Manager, Organizations (Auth0)","description":"<p>We are looking for an experienced Engineering Manager to lead our Organizations team. As an Engineering Manager, you will be responsible for managing a team of 9 remote engineers, mentoring and coaching them to achieve their goals. You will work closely with the Product Manager to plan and deliver the team&#39;s quarterly and annual roadmap. You will also be responsible for owning and being accountable for the quality of the team&#39;s technical estate, effectively managing technical debt, addressing security vulnerabilities, and ensuring wider cross-team technical initiatives are delivered in a timely manner.</p>\n<p>The ideal candidate will have experience growing engineers to the next level, bringing off-track engineers back on track, and working on projects that require close collaboration with external teams. They will also have solid architectural knowledge, backed by experience in designing, implementing, and evolving complex distributed systems.</p>\n<p>In particular, you will be able to spot areas where scalability and performance might be affected. 
You will know how to track and steer a project to successful and timely delivery. Experience in authentication protocols such as OAuth2, OIDC, SAML, and understanding of event-driven architectures, especially Apache Kafka, is a plus.</p>\n<p>As an Engineering Manager at Okta, you will have the opportunity to work on a wide range of challenging projects, collaborate with a talented team of engineers, and contribute to the growth and success of the company.</p>\n<p>If you are a motivated and experienced engineer looking for a new challenge, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_653bca90-18d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7843717","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$168,000-$231,000 CAD","x-skills-required":["NodeJS","JavaScript","TypeScript","PostgreSQL","AWS","Azure","Containers","Authentication protocols","Event-driven architectures"],"x-skills-preferred":["OAuth2","OIDC","SAML","Apache Kafka"],"datePosted":"2026-04-24T12:18:53.914Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"NodeJS, JavaScript, TypeScript, PostgreSQL, AWS, Azure, Containers, Authentication protocols, Event-driven architectures, OAuth2, OIDC, SAML, Apache 
Kafka","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":168000,"maxValue":231000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0c30c21-9ae"},"title":"Staff Software Engineer, Data Engineering","description":"<p>You&#39;ll own Gamma&#39;s data infrastructure and architecture as we scale to hundreds of millions of users and petabytes of data. This means defining the technical strategy for our end-to-end event pipeline architecture, designing distributed systems that handle massive scale with reliability, and establishing the foundation for how data flows through Gamma.</p>\n<p>As a Staff Data Engineer, you&#39;ll balance hands-on engineering with technical leadership. You&#39;ll architect solutions for orders of magnitude growth, mentor engineers across the organization, and drive strategic decisions about our data stack. You&#39;ll work closely with analytics, product, and engineering leadership to enable data-driven decision making at scale while building systems that serve millions of users and inform critical business decisions.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. 
We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own and evolve our end-to-end event pipeline architecture, from Kafka ingestion through Snowflake analytics, setting technical direction for data infrastructure</li>\n<li>Design and architect distributed data systems that scale to orders of magnitude more data volume while maintaining world-class query performance</li>\n<li>Lead initiatives to build and optimize CDC (change data capture) pipelines and streaming data transformations at massive scale</li>\n<li>Establish best practices for data quality, pipeline reliability, and system observability across the organization</li>\n<li>Drive strategic technical decisions about data modeling, infrastructure architecture, and technology choices</li>\n<li>Mentor engineers and elevate data engineering practices across analytics, product, and engineering teams</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years as a data or software engineer with deep expertise in distributed systems, data infrastructure, and high-growth SaaS products at massive scale</li>\n<li>Expert-level knowledge of Apache Kafka (producers, consumers, Kafka Connect, stream processing) and event streaming platforms</li>\n<li>Extensive hands-on experience with Snowflake, including performance optimization, cost management, and data modeling; strong foundation in Postgres, CDC patterns, and replication strategies</li>\n<li>Proven track record architecting and leading major data infrastructure initiatives through orders-of-magnitude growth</li>\n<li>Experience establishing best practices and driving technical strategy across organizations</li>\n<li>Strong communication skills with a history of influencing technical direction across engineering, analytics, and leadership</li>\n<li>Proficiency with dbt, Terraform, and working knowledge of data governance, privacy 
compliance (GDPR, CCPA), and security best practices</li>\n</ul>\n<p><strong>Compensation Range</strong></p>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c0c30c21-9ae","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/4b2c97d1-b12b-46b7-9e24-1fcd248e28a3","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["Apache Kafka","Snowflake","Postgres","dbt","Terraform","data governance","privacy compliance","security best practices"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:17:12.124Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Kafka, Snowflake, Postgres, dbt, Terraform, data governance, privacy compliance, security best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3513ac8f-9c4"},"title":"Staff Software Engineer, PostgreSQL","description":"<p>You&#39;ll own Gamma&#39;s PostgreSQL infrastructure as we scale from 70 million users to hundreds of millions, and from terabytes of data to hundreds of terabytes. 
Your job is to make sure our database can handle orders of magnitude more usage without compromising performance.</p>\n<p>This is a deeply technical, hands-on role. You&#39;ll read and write code daily, dig into low-level systems, debug complex issues across massive datasets, and work on both core database scaling projects and application features. You&#39;ll collaborate closely with backend engineers, data engineers, and infrastructure teams to ensure our database architecture keeps pace with Gamma&#39;s growth.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Architect and implement solutions for horizontally scaling PostgreSQL to hundreds of millions of users and hundreds of terabytes of data</li>\n<li>Own database performance, availability, and reliability as usage grows by orders of magnitude</li>\n<li>Debug complex issues across very large datasets and optimize query performance at scale</li>\n<li>Establish best practices for database design, query optimization, and data modeling across engineering</li>\n<li>Work across core infrastructure and application features that depend on database architecture</li>\n<li>Collaborate with backend, data, and infrastructure engineers to align database strategy with product needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years of software engineering experience with deep expertise in large-scale relational database systems, including hands-on experience managing hundreds of terabytes of data in production</li>\n<li>Expert-level understanding of PostgreSQL (or comparable relational databases), horizontal scaling techniques such as sharding and partitioning, and complex query tuning</li>\n<li>Strong programming skills in at least one backend language, with experience writing and maintaining highly available web APIs</li>\n<li>Experience with large-scale event streaming systems, preferably Apache Kafka</li>\n<li>Ability to explain complex technical concepts clearly to engineers across teams</li>\n<li>Familiarity with TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, or AI/LLM tooling (Nice to have)</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3513ac8f-9c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/f672c729-457f-4143-80e9-363ddf8a0870","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["PostgreSQL","horizontal scaling","sharding","partitioning","complex query tuning","backend language","web APIs","Apache Kafka"],"x-skills-preferred":["TypeScript","Prisma","Apollo GraphQL","Terraform","AWS","AI/LLM tooling"],"datePosted":"2026-04-24T12:16:45.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, horizontal scaling, sharding, partitioning, complex query tuning, backend language, web APIs, Apache Kafka, TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, AI/LLM 
tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_272750a8-710"},"title":"Consultant","description":"<p>As a Consultant at MHP, you will operate infrastructure in AWS using Terraform, create technical concepts for new features and enhancements within a Scrum Team, develop and maintain scalable Java Spring Boot microservices, and work with AWS and Kubernetes.</p>\n<p>You will have expertise in backend programming using Java and Spring Boot, experience with AWS, including services like S3, EC2, and Lambda, and experience with Terraform for creating and managing AWS infrastructure.</p>\n<p>You will also have experience with tools such as IntelliJ and REST tools (Postman or similar), proficiency in working with Kubernetes for microservices, advanced-level AWS certification, experience with Apache Kafka event streaming, experience working with MongoDB, and experience working with GitLab CI/CD pipelines.</p>\n<p>Your start date is by arrangement; you will work full-time (40h) with 27 vacation days and have a permanent employment contract. You will need a valid work permit and be fluent in written and spoken English.</p>\n<p>At MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will be part of a strong team spirit, where every win, big or small, belongs to all of us. 
You will welcome curiosity, creativity, and unconventional thinking patterns, and recognize the importance of healthy, tight-knit communities and sustainable environmental changes.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_272750a8-710","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18226","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","AWS","Terraform","Kubernetes","IntelliJ","REST tools","Apache Kafka","MongoDB","GitLab CI/CD pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:25:42.569Z","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"Java, Spring Boot, AWS, Terraform, Kubernetes, IntelliJ, REST tools, Apache Kafka, MongoDB, GitLab CI/CD pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1125d83c-1eb"},"title":"Staff Software Engineer - Backend","description":"<p>As a Staff Software Engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>\n<p>This involves writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>You will be part of a team that builds highly technical products that fulfil real, important needs in the world. 
We constantly push the boundaries of data and AI technology, while simultaneously operating with the resilience, security and scale that is critical to making customers successful on our platform.</p>\n<p>Our engineering teams build one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>\n<p>We run thousands of Kubernetes clusters across all regions and orchestrate millions of VMs on a daily basis.</p>\n<p>Competencies:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. 
AWS, Azure, GCP, Docker, Kubernetes</li>\n</ul>","url":"https://yubhub.co/jobs/job_1125d83c-1eb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6779233002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$182,400-$247,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:07.479Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":182400,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3922bc3d-027"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems, from security threat detection to cancer drug development. 
We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product. This implies, among others, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>Some example teams you can join include:</p>\n<p>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems. Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way. Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform. Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services. Service Platform: Build high-quality services and manage the services in all environments in a unified way. 
Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</p>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production-level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>\n</ul>","url":"https://yubhub.co/jobs/job_3922bc3d-027","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544443002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:24.664Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud 
technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9238107d-204"},"title":"Software Architect, Reliability Engineering","description":"<p>Join the team as Twilio&#39;s next Reliability Architect.</p>\n<p>As an Architect in SRE, you will drive the technical strategy, vision and outcomes for Twilio&#39;s Reliability Engineering organisation. You will define and lead solutions and initiatives that ensure Twilio products are reliable worldwide, and you will define standards and guide engineering teams on best practices for designing, building, and operating resilient systems.</p>\n<p>This role is pivotal to Twilio&#39;s commitment to operational excellence, scalability, and pragmatic, large-scale systems design in the cloud.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with senior technical leaders across Twilio to set and communicate the reliability strategy, translating business goals into measurable outcomes.</li>\n<li>Influence company-wide architectural decisions while balancing long-term vision with near-term and compliance needs.</li>\n<li>Lead the design, implementation, and operation of scalable solutions and paved roads that enable reliable, high-traffic services;</li>\n<li>Influence company-wide architectural decisions to focus on availability, performance, resilience, and cost efficiency using Kubernetes, AWS, Terraform, and modern observability.</li>\n<li>Ensure integrity and quality across the service lifecycle; design fault-tolerant architectures, incident response, disaster recovery, and capacity/cost management.</li>\n<li>Collaborate with product and cross-functional teams to identify reliability risks and convert them into actionable designs, programs, and tooling.</li>\n<li>Establish and champion reliability 
practices and drive systemic improvements.</li>\n<li>Mentor and grow engineers and technical leaders</li>\n<li>Track and apply emerging SRE, cloud, and large-scale systems best practices; introduce pragmatic innovations that improve reliability at scale.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>15+ years of experience in Reliability Engineering, Software Engineering, DevOps roles with a focus on infrastructure, backend systems, and reliability, including as a principal/architect.</li>\n<li>Strong experience in driving strategic technical decisions and defining long-term technical vision.</li>\n<li>In-depth understanding of the role of Reliability Engineering in a large and diverse SaaS organisation.</li>\n<li>Experience driving cross-org technical architecture outcomes.</li>\n<li>Knowledge of cloud architecture, devops practices, and large-scale systems design with microservices.</li>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field (or equivalent experience).</li>\n<li>Strong production experience, including operational management, scaling, partitioning strategies, and tuning for performance and reliability in high-scale environments.</li>\n<li>Hands-on experience with Kubernetes (e.g., EKS), deploying and managing stateful services, and cloud services like AWS.</li>\n<li>Proficiency in infrastructure-as-code tools such as Terraform or CloudFormation for automating infrastructure.</li>\n<li>Expertise in observability tools (e.g., Prometheus, Grafana, Datadog) for monitoring distributed systems and setting up alerting.</li>\n<li>Proficient in at least one programming language (e.g., Go, Python, Java) for building automation and tooling.</li>\n<li>Experience designing incident response processes, SLOs/SLIs, runbooks, and participating in on-call rotations.</li>\n<li>Experience running cross-functional post-incident reviews and driving improvements.</li>\n<li>Strong understanding of distributed systems principles, 
including consensus, durability, throughput, and availability tradeoffs.</li>\n<li>Proven track record of leading reliability improvements in data-intensive or mission-critical systems and collaborating with engineering teams.</li>\n<li>Excellent problem-solving, analytical, verbal, and written communication skills, with the ability to work in cross-functional and distributed environments.</li>\n<li>Demonstrated leadership in mentoring teams, influencing decisions, and balancing long-term objectives with short-term needs.</li>\n<li>Ability to influence and build effective working relationships with all levels of the organisation.</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Specific experience owning and operating large AWS footprints.</li>\n<li>Knowledge of Kubernetes architecture and concepts.</li>\n<li>Experience with data technologies like Apache Kafka, AWS MSK, or similar for reliable streaming.</li>\n<li>Passion for building reliable products, with prior projects in high-availability systems</li>\n</ul>","url":"https://yubhub.co/jobs/job_9238107d-204","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7658259","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,840.00 - $284,800.00 per year","x-skills-required":["Reliability Engineering","Software Engineering","DevOps","Cloud Architecture","Microservices","Kubernetes","AWS","Terraform","Observability Tools","Programming Languages","Incident Response","Distributed Systems Principles"],"x-skills-preferred":["Apache Kafka","AWS MSK","Kubernetes Architecture","Data 
Technologies"],"datePosted":"2026-04-18T15:42:56.209Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Reliability Engineering, Software Engineering, DevOps, Cloud Architecture, Microservices, Kubernetes, AWS, Terraform, Observability Tools, Programming Languages, Incident Response, Distributed Systems Principles, Apache Kafka, AWS MSK, Kubernetes Architecture, Data Technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227840,"maxValue":284800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1044456b-79a"},"title":"Staff Software Engineer - Backend","description":"<p>We are obsessed with enabling data teams to solve the world&#39;s toughest problems. As a software engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, and operate micro-services for the Databricks platform and product.</p>\n<p>This implies, among others, writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>You will be part of one of the following teams:</p>\n<p>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems. Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way. 
Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform. Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services. Service Platform: Build high-quality services and manage the services in all environments in a unified way. Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</p>","url":"https://yubhub.co/jobs/job_1044456b-79a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6779232002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$182,400-$247,000 USD","x-skills-required":["Scala","Java","Apache Spark","Apache Kafka","Cloud APIs (AWS, Azure, CloudFormation, Terraform)","SQL","Software security","Cloud technologies (AWS, Azure, GCP, Docker, Kubernetes)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:26.705Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Apache Spark, Apache Kafka, Cloud APIs (AWS, Azure, CloudFormation, Terraform), SQL, Software security, Cloud technologies (AWS, Azure, GCP, Docker, 
Kubernetes)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":182400,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21860f67-527"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>As a software engineer with a backend focus, you will work closely with your team and product management to prioritize, design, implement, test, and operate micro-services for the Databricks platform and product. This implies, among others, writing software in Scala/Java, building data pipelines (Apache Spark™, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>Some example teams you can join:</p>\n<p>Data Science and Machine Learning Infrastructure: Build services and infrastructure at the intersection of machine learning and distributed systems. Compute Fabric: Build the resource management infrastructure powering all the big data and machine learning workloads on the Databricks platform in a robust, flexible, secure, and cloud-agnostic way. Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Enterprise Platform: Offer a simple and powerful experience for onboarding and managing all of their data teams across 10ks of users on the Databricks platform. 
Observability: Provide a world-class platform for Databricks engineers to comprehensively observe and introspect their applications and services. Service Platform: Build high-quality services and manage the services in all environments in a unified way. Core Infra: Build the core infrastructure that powers Databricks, making it available across all geographic regions and Cloud providers.</p>\n<p>Competencies:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production-level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large-scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. AWS, Azure, GCP, Docker, Kubernetes</li>\n</ul>\n<p>Pay Range Transparency: The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. 
Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>","url":"https://yubhub.co/jobs/job_21860f67-527","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/5408888002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:55.276Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901a6402-db5"},"title":"Data Engineer","description":"<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. 
Tech stack includes Redshift, Airflow, and DBT.</p>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Strong Python and SQL</li>\n<li>Hands-on experience with Redshift, Airflow, DBT</li>\n<li>Mandatory hands-on experience with Apache Spark (batch and/or structured processing)</li>\n</ul>","url":"https://yubhub.co/jobs/job_901a6402-db5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Redshift","Airflow","DBT","Apache Spark"],"x-skills-preferred":["Apache Flink","Apache Kafka","Hadoop ecosystem components","ETL design patterns","performance tuning"],"datePosted":"2025-12-26T10:57:30.602Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chengdu"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning"}]}