{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/apache"},"x-facet":{"type":"skill","slug":"apache","display":"Apache","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d384afb9-a9d"},"title":"Technical Engagement Manager","description":"<p>We are seeking a highly skilled and experienced Technical Engagement Manager to join our dynamic team. You will be responsible for working closely with a few of our largest customers to understand their business challenges and requirements, architecting solutions using Starburst products and driving business outcomes across the customer journey, from initial engagement to successful adoption.</p>\n<p>As a Technical Engagement Manager, you will establish trust and credibility by demonstrating Data &amp; AI industry knowledge, understanding of the buyer&#39;s organization, and a track record of successful engagements. 
You will build and nurture strong relationships with the champion, who serves as the internal advocate for the engagement within the customer organization.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Establishing trust and credibility with customers</li>\n<li>Building and nurturing strong relationships with champions</li>\n<li>Offering support and guidance to champions throughout the engagement process</li>\n<li>Proactively addressing concerns and objections raised by other stakeholders</li>\n<li>Soliciting feedback from other stakeholders throughout the engagement process</li>\n<li>Collaborating with sales teams to understand customer needs and objectives</li>\n<li>Driving adoption of Starburst culminating in the Customer reaching its success criteria</li>\n</ul>\n<p>Some of the things we look for include:</p>\n<ul>\n<li>A Bachelor&#39;s degree in business, technology, or a related field</li>\n<li>A deep understanding of data architecture principles, including data modeling, data integration, and data warehousing</li>\n<li>Proficiency in SQL and experience with distributed query engines (e.g., Presto, Trino, Apache Spark)</li>\n<li>Strong problem-solving skills and the ability to think strategically about business challenges and technical solutions</li>\n<li>A proven track record of successfully managing customer engagements and delivering business outcomes</li>\n<li>Excellent communication and interpersonal skills, with the ability to build strong relationships with customers and internal teams</li>\n</ul>\n<p>We offer a competitive salary range of $155,000-$190,000 USD, depending on relevant skills, experience, education, and training, and specific work location. 
All employees receive equity packages (ISOs) and have access to a comprehensive benefits offering.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d384afb9-a9d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Starburst","sameAs":"https://www.starburst.io/","logo":"https://logos.yubhub.co/starburst.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/starburst/jobs/5196535008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155,000-$190,000 USD","x-skills-required":["SQL","Presto","Trino","Apache Spark","Data architecture principles","Data modeling","Data integration","Data warehousing"],"x-skills-preferred":[],"datePosted":"2026-04-24T16:11:50.314Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Charlotte, NC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Presto, Trino, Apache Spark, Data architecture principles, Data modeling, Data integration, Data warehousing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155000,"maxValue":190000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_49ef318f-90a"},"title":"Director, Site Reliability Engineer | Senior Engineering Team Director","description":"<p>We&#39;re seeking a Site Reliability Engineering (SRE) Lead to design, build, and maintain resilient, high-scale systems supporting BlackRock&#39;s Private Markets platform. 
In this hands-on leadership role, you&#39;ll apply deep engineering expertise to solve complex challenges, guide a global team, shape technical direction, and communicate effectively with senior stakeholders, ensuring the reliability of mission-critical systems that power private market investment workflows and decision-making. You will drive the adoption of AI-driven solutions to accelerate incident detection and triage, reduce toil, improve forecasting and capacity planning, and strengthen end-to-end observability and resilience.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Take ownership of project priorities, deadlines and deliverables using Agile methodologies, with clear outcomes around reliability automation and AI-enabled operations</li>\n<li>Understand and refine business and functional requirements, translating them into SLOs/SLIs and AI-assisted observability and support capabilities</li>\n<li>Hands-on approach to getting work done; this role requires a “roll your sleeves up” mentality, including building and operationalizing reliability tooling and automation that measurably reduces toil and improves stability</li>\n<li>Be a leader with vision and a partner in brainstorming solutions for team productivity and efficiency to improve engineering effectiveness</li>\n<li>Drive priority setting of the engineering teams, balancing foundational reliability work with delivery of new product features</li>\n<li>Improve engineering culture by encouraging continuous focus on reliability across the entire application lifecycle, and by adopting AI-enabled SRE practices (e.g., intelligent alerting, automated diagnosis, and self-healing where appropriate)</li>\n<li>Proactive participant in architectural and design decisions, including AI-ready telemetry, data quality, and model integration patterns for operational analytics</li>\n<li>Design and implement end-to-end monitoring solutions for application and infrastructure components, leveraging modern observability 
platforms plus AI/ML techniques for anomaly detection, correlation, and alert noise reduction</li>\n<li>Drive the engineering of capacity management and demand forecasting solutions, including predictive analytics/ML approaches where they add measurable value</li>\n<li>Act as a culture carrier and leader, passing on SRE knowledge and best practices to the engineering team</li>\n<li>Drive detailed root cause investigations for production incidents with rigorous focus on issue avoidance, using AI-assisted correlation/analysis to accelerate time-to-insight</li>\n<li>Create/coordinate retros for significant incidents, ensuring learnings are captured in automated/AI-assisted runbooks and embedded into prevention mechanisms</li>\n<li>Additional core engineering functions, such as adding custom telemetry metrics/logs/traces to the code base of in-scope applications to enable AI/ML-driven operational insights</li>\n<li>Anticipate new opportunities to continuously evolve the resiliency profile of scoped applications and infrastructure</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>B.S. / M.S. degree in Computer Science, Engineering or a related discipline with 10+ years of experience</li>\n<li>Experience leading high-performing engineering/SRE teams, with a track record of driving continuous improvement through automation and AI-enabled operations</li>\n<li>Demonstrated ability to represent engineering/SRE priorities, status, and risk to senior leadership stakeholders with clear, executive-ready communication</li>\n<li>Hands-on experience building or operating AI-assisted capabilities (AIOps, ML-based anomaly detection, or GenAI workflows) in an engineering/production environment</li>\n<li>A passion for providing engineering support for highly available, performant full-stack applications with a “Student of Technology” attitude</li>\n<li>Experience with relational and NoSQL databases (e.g., 
Redis, Apache Cassandra)</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Retirement investment and tools designed to help you in building a sound financial future</li>\n<li>Access to education reimbursement</li>\n<li>Comprehensive resources to support your physical health and emotional well-being</li>\n<li>Family support programs</li>\n<li>Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about</li>\n</ul>\n<p>Hybrid Work Model:</p>\n<ul>\n<li>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all</li>\n<li>Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week</li>\n<li>Some business groups may require more time in the office due to their roles and responsibilities</li>\n<li>We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation</li>\n</ul>\n<p>About BlackRock:</p>\n<ul>\n<li>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being</li>\n<li>Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses</li>\n<li>Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_49ef318f-90a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/cLBuSgz7avHiG3cKzS91ZB/director%2C-site-reliability-engineer-%7C-senior-engineering-team-director-in-england-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Site Reliability Engineering","Agile Methodologies","Reliability Automation","AI-Enabled Operations","Business Requirements","Functional Requirements","SLOs/SLIs","Observability","Support Capabilities","Reliability Tooling","Automation","Stability","Leadership","Vision","Team Productivity","Efficiency","Engineering Effectiveness","Priority Setting","Foundational Reliability","New Product Features","Engineering Culture","Reliability Across Application Lifecycle","AI-Enabled SRE Practices","Intelligent Alerting","Automated Diagnosis","Self-Healing","Architectural Decisions","AI-Ready Telemetry","Data Quality","Model Integration Patterns","Operational Analytics","Monitoring Solutions","Application Components","Infrastructure Components","Anomaly Detection","Correlation","Alert Noise Reduction","Capacity Management","Demand Forecasting","Predictive Analytics","ML Approaches","Root Cause Investigations","Production Incidents","Issue Avoidance","AI-Assisted Correlation","Time-To-Insight","Retros","Significant Incidents","Learnings","Runbooks","Prevention Mechanisms","Custom Telemetry Metrics","Logs","Traces","AI/ML-Driven Operational Insights","Resiliency Profile","Scoped Applications","Infrastructure","Relational Database","NoSQL Database","Redis","Apache 
Cassandra"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:19:53.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"England"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Site Reliability Engineering, Agile Methodologies, Reliability Automation, AI-Enabled Operations, Business Requirements, Functional Requirements, SLOs/SLIs, Observability, Support Capabilities, Reliability Tooling, Automation, Stability, Leadership, Vision, Team Productivity, Efficiency, Engineering Effectiveness, Priority Setting, Foundational Reliability, New Product Features, Engineering Culture, Reliability Across Application Lifecycle, AI-Enabled SRE Practices, Intelligent Alerting, Automated Diagnosis, Self-Healing, Architectural Decisions, AI-Ready Telemetry, Data Quality, Model Integration Patterns, Operational Analytics, Monitoring Solutions, Application Components, Infrastructure Components, Anomaly Detection, Correlation, Alert Noise Reduction, Capacity Management, Demand Forecasting, Predictive Analytics, ML Approaches, Root Cause Investigations, Production Incidents, Issue Avoidance, AI-Assisted Correlation, Time-To-Insight, Retros, Significant Incidents, Learnings, Runbooks, Prevention Mechanisms, Custom Telemetry Metrics, Logs, Traces, AI/ML-Driven Operational Insights, Resiliency Profile, Scoped Applications, Infrastructure, Relational Database, NoSQL Database, Redis, Apache Cassandra"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_73381153-35f"},"title":"Security Developer - Associate","description":"<p>About this role</p>\n<p>Data Engineer – AWS Native Data Platforms</p>\n<p>Technology &amp; Operations | BlackRock</p>\n<p>We are looking for a Data Engineer to join BlackRock’s Technology &amp; Operations organization, supporting the design, build, and operation of our Cloud-native data platform that powers 
critical technology, security, and operational use cases across the firm.</p>\n<p>This role sits within a team responsible for building reliable, secure, and observable data pipelines in a highly regulated environment. You will work closely with technology, operations, and information security partners to deliver data products that enable transparency, automation, and risk-informed decision-making at scale.</p>\n<p>The ideal candidate is an engineer at heart, comfortable working end-to-end across ingestion, transformation, orchestration, and governance, who values clean design, strong documentation, and operational excellence.</p>\n<p>What You’ll Do</p>\n<ul>\n<li>Design, build, and maintain AWS-native data pipelines for batch and event-driven workloads, with a focus on reliability, scalability, and security.</li>\n<li>Develop and operate data workflows using Apache Airflow for orchestration and Python and SQL for transformation and data quality logic.</li>\n<li>Implement data transformations and models using modern analytics engineering practices (e.g., dbt-style patterns, tested transformations, incremental processing).</li>\n<li>Integrate data from a variety of enterprise sources, including cloud services, internal platforms, APIs, and security/operational telemetry.</li>\n<li>Partner with Information Security, Risk, and Operations teams to translate business and control requirements into durable data solutions.</li>\n<li>Embed data quality, lineage, and observability into pipelines using testing frameworks and monitoring standards.</li>\n<li>Operate within BlackRock’s cloud security and governance standards, including IAM, encryption, logging, and secrets management.</li>\n<li>Contribute to CI/CD pipelines, infrastructure-as-code patterns, and standardized platform tooling.</li>\n<li>Document data products, pipelines, and operating procedures to support 
transparency and long-term maintainability.</li>\n<li>Participate in design reviews, code reviews, and incident/post-incident analysis to continuously improve platform resilience.</li>\n</ul>\n<p>Core Technologies You’ll Work With</p>\n<ul>\n<li>AWS: S3, IAM, Glue, Lambda, Step Functions, CloudWatch, Secrets Manager, OpenSearch, and related native services</li>\n<li>Orchestration: Apache Airflow</li>\n<li>Languages: Python, SQL</li>\n<li>Data Modeling &amp; Transformation: Analytics-engineering patterns (e.g., dbt-like workflows)</li>\n<li>Data Quality &amp; Testing: Schema and data validation frameworks (e.g., Great Expectations-style approaches)</li>\n<li>Infrastructure &amp; Delivery: CI/CD, Git-based workflows, infrastructure-as-code (Terraform or equivalent)</li>\n<li>Security &amp; Governance: Encryption, access controls, audit logging, platform security baselines</li>\n</ul>\n<p>What We’re Looking For</p>\n<ul>\n<li>3-6 years of experience as a Data Engineer, Analytics Engineer, or similar role building production data pipelines.</li>\n<li>Strong hands-on experience with AWS-native data services in a regulated or enterprise environment.</li>\n<li>Proficiency in Python and SQL, with an emphasis on readable, testable, and maintainable code.</li>\n<li>Experience with workflow orchestration (Airflow or equivalent).</li>\n<li>Solid understanding of data modeling, incremental processing, and performance optimization.</li>\n<li>Familiarity with data quality, monitoring, and operational support for production data systems.</li>\n<li>Experience collaborating with cross-functional partners (e.g., security, operations, product, or risk teams).</li>\n<li>A disciplined approach to documentation, change management, and incident response.</li>\n</ul>\n<p>Nice to Have</p>\n<ul>\n<li>Experience 
supporting security, risk, or compliance data domains.</li>\n<li>Exposure to OpenSearch / Elasticsearch, metrics pipelines, or log-analytics platforms.</li>\n<li>Familiarity with cloud security controls, IAM design, and secrets management.</li>\n<li>Experience building data platforms that support executive-level reporting or regulatory oversight.</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_73381153-35f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/noeyyV7CbztGYxPetLe2Cu/security-developer---associate-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Apache Airflow","Python","SQL","Data Modeling & Transformation","Data Quality & Testing","Infrastructure & Delivery","Security & Governance"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:18:02.924Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Apache Airflow, Python, SQL, Data Modeling & Transformation, Data Quality & Testing, Infrastructure & Delivery, Security & Governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_086c2470-e2b"},"title":"Lead Developer - Vice President","description":"<p>About this role</p>\n<p>Data Engineer – AWS Native Data Platforms</p>\n<p>We are looking for a Data Engineer to join BlackRock’s Technology &amp; Operations organization, supporting the design, build, and operation of our Cloud-native data platform that powers critical technology, security, and operational use cases across the firm.</p>\n<p>This role sits within a team responsible for building reliable, secure, and observable data pipelines in a highly regulated environment. 
You will work closely with technology, operations, and information security partners to deliver data products that enable transparency, automation, and risk-informed decision-making at scale.</p>\n<p>The ideal candidate is an engineer at heart, comfortable working end-to-end across ingestion, transformation, orchestration, and governance, who values clean design, strong documentation, and operational excellence.</p>\n<p>What You’ll Do</p>\n<ul>\n<li>Design, build, and maintain AWS-native data pipelines for batch and event-driven workloads, with a focus on reliability, scalability, and security.</li>\n<li>Develop and operate data workflows using Apache Airflow for orchestration and Python and SQL for transformation and data quality logic.</li>\n<li>Implement data transformations and models using modern analytics engineering practices (e.g., dbt-style patterns, tested transformations, incremental processing).</li>\n<li>Integrate data from a variety of enterprise sources, including cloud services, internal platforms, APIs, and security/operational telemetry.</li>\n<li>Partner with Information Security, Risk, and Operations teams to translate business and control requirements into durable data solutions.</li>\n<li>Embed data quality, lineage, and observability into pipelines using testing frameworks and monitoring standards.</li>\n<li>Operate within BlackRock’s cloud security and governance standards, including IAM, encryption, logging, and secrets management.</li>\n<li>Contribute to CI/CD pipelines, infrastructure-as-code patterns, and standardized platform tooling.</li>\n<li>Document data products, pipelines, and operating procedures to support transparency and long-term maintainability.</li>\n<li>Participate in design reviews, code reviews, and incident/post-incident analysis to continuously improve platform resilience.</li>\n</ul>\n<p>Core Technologies You’ll Work With</p>\n<ul>\n<li>AWS: S3, IAM, Glue, Lambda, Step Functions, CloudWatch, Secrets Manager, OpenSearch, 
and related native services</li>\n<li>Orchestration: Apache Airflow</li>\n<li>Languages: Python, SQL</li>\n<li>Data Modeling &amp; Transformation: Analytics-engineering patterns (e.g., dbt-like workflows)</li>\n<li>Data Quality &amp; Testing: Schema and data validation frameworks (e.g., Great Expectations-style approaches)</li>\n<li>Infrastructure &amp; Delivery: CI/CD, Git-based workflows, infrastructure-as-code (Terraform or equivalent)</li>\n<li>Security &amp; Governance: Encryption, access controls, audit logging, platform security baselines</li>\n</ul>\n<p>What We’re Looking For</p>\n<ul>\n<li>7+ years of relevant experience as a Data Engineer, Analytics Engineer, or similar role building production data pipelines.</li>\n<li>Strong hands-on experience with AWS-native data services in a regulated or enterprise environment.</li>\n<li>Proficiency in Python and SQL, with an emphasis on readable, testable, and maintainable code.</li>\n<li>Experience with workflow orchestration (Airflow or equivalent).</li>\n<li>Solid understanding of data modeling, incremental processing, and performance optimization.</li>\n<li>Familiarity with data quality, monitoring, and operational support for production data systems.</li>\n<li>Experience collaborating with cross-functional partners (e.g., security, operations, product, or risk teams).</li>\n<li>A disciplined approach to documentation, change management, and incident response.</li>\n</ul>\n<p>Nice to Have</p>\n<ul>\n<li>Experience supporting security, risk, or compliance data domains.</li>\n<li>Exposure to OpenSearch / Elasticsearch, metrics pipelines, or log-analytics platforms.</li>\n<li>Familiarity with cloud security controls, IAM design, and secrets management.</li>\n<li>Experience building data platforms that support executive-level reporting or regulatory oversight.</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits 
including:</p>\n<ul>\n<li>Retirement investment and tools designed to help you in building a sound financial future.</li>\n<li>Access to education reimbursement.</li>\n<li>Comprehensive resources to support your physical health and emotional well-being.</li>\n<li>Family support programs.</li>\n<li>Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</li>\n</ul>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p>This mission would not be possible without our smartest investment – the one we make in our employees. 
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_086c2470-e2b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/81uPaQe8ESRj635WgGaq2b/lead-developer---vice-president-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Apache Airflow","Python","SQL","Data Modeling & Transformation","Data Quality & Testing","Infrastructure & Delivery","Security & Governance"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:17:57.133Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Apache Airflow, Python, SQL, Data Modeling & Transformation, Data Quality & Testing, Infrastructure & Delivery, Security & Governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c452765-84f"},"title":"Site Reliability Data Engineer","description":"<p>For over 31,000 growing businesses and HR teams seeking a comprehensive, all-in-one HR suite, Workable emerges as the premier solution. We uniquely combine the world&#39;s most widely adopted Applicant Tracking System (Workable Recruiting) with a full-spectrum employee management system (Workable HR).</p>\n<p>At Workable, we empower companies to focus on what truly matters: hiring the right people and fostering their growth. 
While we take HR seriously, we maintain a lighthearted and collaborative culture. At Workable, you&#39;ll find smart people who have fun, learn, innovate, and help others do the same.</p>\n<p>We respect everyone, we hire the best, and make sure every experience is special.</p>\n<p>As a Site Reliability Data Engineer based in Athens, you will play a critical role in ensuring the reliability, scalability, and performance of our data infrastructure and pipelines. You will collaborate closely with engineering teams to build and operate robust cloud-based systems, driving automation and observability across our platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Build, operate, and improve ETL/ELT pipelines, Spark workloads, and data warehouse components.</li>\n<li>Develop tools and automations to simplify and harden data pipeline workflows and general operations.</li>\n<li>Design, implement, and maintain scalable, highly available cloud infrastructure and services with a focus on automation and reliability.</li>\n<li>Develop and operate observability tooling for monitoring, logging, tracing, and data-pipeline metrics (freshness, completeness, latency, error rates).</li>\n<li>Collaborate with development teams to instrument, deploy, and troubleshoot production systems across microservices on Kubernetes.</li>\n<li>Operate, deploy, and monitor data infrastructure and cloud services from development to production.</li>\n<li>Own availability, scalability, and performance of systems, focusing on data pipelines and warehousing components.</li>\n<li>Partner with peer SREs to roll out production changes and mitigate data-related and infrastructure incidents.</li>\n<li>Troubleshoot issues across data pipelines and production systems; support capacity planning and analyze system and data workflow performance.</li>\n<li>Provide data engineering expertise to engineering teams and work cross-functionally with developers and analysts on designing, releasing, and 
troubleshooting production systems.</li>\n<li>Own team projects and ensure timely delivery.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>BS/MS degree in Computer Science, Engineering, or equivalent practical experience</li>\n<li>2+ years of experience in site reliability engineering, data engineering, or a closely related role, including programming</li>\n<li>Experience with a major cloud provider (AWS or GCP)</li>\n<li>Hands-on experience with infrastructure-as-code or configuration management tools (Terraform or Ansible)</li>\n<li>Experience with ETL/ELT concepts and tools (Airflow or dbt)</li>\n<li>Experience with Apache Spark or similar distributed data processing frameworks</li>\n<li>Experience with cloud data warehouses (BigQuery, Redshift, or Snowflake)</li>\n<li>Proficiency in at least one programming language (Python, Go, or Scala)</li>\n<li>Excellent written English proficiency</li>\n<li>Legally authorized to work in Greece</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Production experience with Kubernetes</li>\n<li>Experience with centralized monitoring and logging systems</li>\n<li>Experience with streaming systems (Kafka or Spark Streaming)</li>\n</ul>\n<p>Benefits</p>\n<p>Our employees enjoy benefits that make them more productive and contribute directly to the development of their professional skills. We want to be able to attract the best of the best and make sure they keep getting better. 
On top of an exciting, vibrant and intellectually challenging environment, we are offering:</p>\n<ul>\n<li>Comprehensive Health Coverage: A robust health insurance plan that includes coverage for your dependents.</li>\n<li>Competitive Compensation: An attractive salary paired with a performance-based bonus plan.</li>\n<li>Flexible Work Model: Enjoy the best of both worlds with a hybrid setup, two days working from home and three in the office.</li>\n<li>Top-Tier Tools: Apple gear and access to the latest productivity tools to help you excel.</li>\n<li>Stay Connected: A mobile data plan to keep you online wherever you are.</li>\n<li>Delicious Perks: Fresh, tasty food at the office to fuel your productivity.</li>\n<li>Relocation Bonus: To help you settle in smoothly in Athens.</li>\n</ul>\n<p>Workable is most decidedly an equal opportunity employer. We want applicants of diverse backgrounds and hire without regard to colour, gender, religion, national origin, citizenship, disability, age, sexual orientation, or any other characteristic protected by law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c452765-84f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Workable"},"x-apply-url":"https://apply.workable.com/j/273C8E852D","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud computing","Data engineering","ETL/ELT","Apache Spark","Cloud data warehouses","Kubernetes","Infrastructure-as-code","Configuration management","Observability tooling","Monitoring","Logging","Tracing","Data-pipeline metrics"],"x-skills-preferred":["Production experience with Kubernetes","Centralized monitoring and logging systems","Streaming systems (Kafka or Spark 
Streaming)"],"datePosted":"2026-04-24T14:14:22.101Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Athens"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing, Data engineering, ETL/ELT, Apache Spark, Cloud data warehouses, Kubernetes, Infrastructure-as-code, Configuration management, Observability tooling, Monitoring, Logging, Tracing, Data-pipeline metrics, Production experience with Kubernetes, Centralized monitoring and logging systems, Streaming systems (Kafka or Spark Streaming)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8bd53be2-6cf"},"title":"Senior Site Reliability Data Engineer","description":"<p>For over 31,000 growing businesses and HR teams seeking a comprehensive, all-in-one HR suite, Workable emerges as the premier solution. We uniquely combine the world’s most widely adopted Applicant Tracking System (Workable Recruiting) with a full-spectrum employee management system (Workable HR).</p>\n<p>At Workable, we empower companies to focus on what truly matters: hiring the right people and fostering their growth. While we take HR seriously, we maintain a lighthearted and collaborative culture. At Workable, you’ll find smart people who have fun, learn, innovate, and help others do the same.</p>\n<p>We respect everyone, we hire the best, and make sure every experience is special.</p>\n<p>As a Senior Site Reliability Data Engineer based in Athens, Greece, you will play a critical role in ensuring the reliability, scalability, and performance of Workable&#39;s data and cloud infrastructure. 
This is a high-impact position where your expertise will directly influence the operational excellence and growth of our data platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, build, and maintain core data engineering infrastructure including ETL/ELT pipelines, Apache Spark workloads, and data warehouse systems.</li>\n</ul>\n<ul>\n<li>Ensure availability, scalability, and performance of data infrastructure and pipelines with deep operational ownership.</li>\n</ul>\n<ul>\n<li>Design, implement, and maintain scalable reliability tooling and automation to streamline deployment, monitoring, and incident response across distributed services.</li>\n</ul>\n<ul>\n<li>Operate and optimize Kubernetes-based cloud infrastructure to ensure high availability, performance, and cost-efficiency.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with developers and analysts to design, release, and troubleshoot production systems; provide data engineering expertise.</li>\n</ul>\n<ul>\n<li>Lead cross-functional projects with development teams to improve system reliability, automate capacity planning, and enforce SRE best practices.</li>\n</ul>\n<ul>\n<li>Develop and maintain centralized observability, including logging, metrics, tracing, and alerting pipelines; continuously improve incident detection and response workflows.</li>\n</ul>\n<ul>\n<li>Own observability for data pipelines (freshness, completeness, latency, error rates) and ensure SLOs are met.</li>\n</ul>\n<ul>\n<li>Plan platform growth and manage capacity for the data platform and related infrastructure.</li>\n</ul>\n<ul>\n<li>Operate, deploy, and monitor data platform components and broader cloud services from development through production.</li>\n</ul>\n<ul>\n<li>Develop tools and automation to simplify data operations and make deployments more robust and self-service.</li>\n</ul>\n<ul>\n<li>Collaborate with peer SREs to roll out production changes and mitigate data/infrastructure 
incidents.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8bd53be2-6cf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Workable"},"x-apply-url":"https://apply.workable.com/j/22CEAF6027","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","ETL/ELT pipelines","cloud data warehouses","major cloud provider","infrastructure automation tools","centralized logging","monitoring","observability frameworks"],"x-skills-preferred":["production experience with Kubernetes","streaming systems","data quality","data observability tooling","relational and NoSQL databases","proficiency in programming languages","deep knowledge of Linux systems"],"datePosted":"2026-04-24T14:13:36.167Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Athens"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, ETL/ELT pipelines, cloud data warehouses, major cloud provider, infrastructure automation tools, centralized logging, monitoring, observability frameworks, production experience with Kubernetes, streaming systems, data quality, data observability tooling, relational and NoSQL databases, proficiency in programming languages, deep knowledge of Linux systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af746432-e09"},"title":"VP, Senior Full-Stack Engineer (Java & Angular)","description":"<p>Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and tackling some of the world&#39;s most interesting challenges? 
At BlackRock, we are looking for Software Engineers who like to innovate and solve complex problems.</p>\n<p>We recognize that strength comes from diversity, and will embrace your unique skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual.</p>\n<p>Aladdin by BlackRock manages over $30 trillion (USD) in assets, and its engineers have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to achieve their investment objectives, save for retirement, pay for college, buy a home, and improve people&#39;s financial well-being.</p>\n<p>This role will be responsible for all aspects of software development, testing and ensuring compatibility with enterprise and solutions architecture by harnessing modern development technologies.</p>\n<p>The position is for a Vice President within the Investment and Trading engineering team in Aladdin Engineering and is responsible for delivering software solutions leveraged by Portfolio Managers, Traders, Researchers, Risk Managers, Compliance Officers and Investment Operations.</p>\n<p>We are passionate about building quality software and scalable technology to meet the needs of tomorrow. We have strong Java expertise and work with a range of technologies such as Azure cloud, Kafka, Cassandra, Docker, Kubernetes, Angular and many others. We are committed to open source, and contributing back to the community. We write testable software every day, with a focus on agile innovation.</p>\n<p>The team is looking for an ambitious hands-on senior software engineer to work on an exciting strategic product to expand our Aladdin Portfolio Management capabilities. You will work with a global team and be part of an outstanding group of engineers setting and evolving the technology direction of our upcoming suite of applications for Portfolio Management. 
You are passionate about multiple aspects of enterprise software development – Performance, Scale, Resilience, Usability and Maintainability. As a key member of our engineering team, you will be encouraged and empowered to bring your ideas forward to help shape the technical solutions. This will make you a strong team player in our distributed and diverse global team. You also have opportunities to present your innovative ideas to leaders across the firm.</p>\n<p>Responsibilities include:</p>\n<ul>\n<li>Develop and maintain institutional-grade investment functionalities used by portfolio managers</li>\n</ul>\n<ul>\n<li>Help design and build the next generation of our world-class investment platform</li>\n</ul>\n<ul>\n<li>Contribute to an agile development team working with designers, product managers, users</li>\n</ul>\n<ul>\n<li>Quality-first mindset - apply quality software engineering practices through all phases of development and into production</li>\n</ul>\n<ul>\n<li>Collaborate with team members in a multi-office, multi-country, global team environment.</li>\n</ul>\n<ul>\n<li>Ensure resilience, stability, and high performance of software delivery through quality code reviews, unit, regression and user acceptance testing, DevOps and level two production support.</li>\n</ul>\n<ul>\n<li>Nurture the talent around you and lead by example.</li>\n</ul>\n<ul>\n<li>In this senior position, people will look up to you, and you will be responsible for driving an inclusive and competitive culture in the team.</li>\n</ul>\n<p>Competencies include:</p>\n<ul>\n<li>Passionate about technology and user experience, with personal ownership for the work you do</li>\n</ul>\n<ul>\n<li>Curious and eager to learn new business domains and tech skills, and willing to challenge the status quo</li>\n</ul>\n<ul>\n<li>Know how to leverage AI tools to increase your productivity</li>\n</ul>\n<ul>\n<li>Willing to embrace work outside of your comfort zone, and open to guidance from 
others</li>\n</ul>\n<ul>\n<li>Data and quality focused, with an eye for the details that make great solutions</li>\n</ul>\n<ul>\n<li>You are always willing to learn from any issues/incidents, try to continuously improve</li>\n</ul>\n<ul>\n<li>Experienced working in either Portfolio Management or Trading segments</li>\n</ul>\n<ul>\n<li>Knowledgeable in Trading, Equity, FI, OTC, Exchange Traded Derivatives, Prime Brokerage, Compliance, and Portfolio Management processes.</li>\n</ul>\n<p>Experience and Qualifications:</p>\n<ul>\n<li>Designed and engineered enterprise financial solutions in production with a strong foundation in Java and related technologies</li>\n</ul>\n<ul>\n<li>Experience with distributed caching &amp; computing, real-time, and highly scalable technologies (such as Apache Ignite, Kafka, Redis) and modern front-end web development (such as Micro-frontends, Web-streaming, Angular/React, Type Script).</li>\n</ul>\n<ul>\n<li>Passionate about creating the best user experience</li>\n</ul>\n<ul>\n<li>B.E. or M.S. 
degree in Computer Science, Engineering or a related discipline</li>\n</ul>\n<ul>\n<li>Excellent analytical, problem-solving and communication skills</li>\n</ul>\n<ul>\n<li>An ability to apply modern tech solutions to solve investment and trading problems</li>\n</ul>\n<ul>\n<li>A track record of forging strong relationships and building trusted partnerships through open dialogue and continuous delivery</li>\n</ul>\n<ul>\n<li>Experience working with UX designers, product managers, technical/enterprise leads, and architects across the SDLC lifecycle; understanding of systems requirements, design, development, testing, deployment and documentation</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Certification (e.g., CFA) or passion in investment/portfolio management/trading processes</li>\n</ul>\n<ul>\n<li>Experience with MSSQL or Apache Cassandra Database</li>\n</ul>\n<ul>\n<li>Experience with Cloud platforms such as Microsoft Azure</li>\n</ul>\n<ul>\n<li>Experience with AI models and tools</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af746432-e09","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/65fGJ5np3dAFaJEGL4T3Py/vp%2C-senior-full-stack-engineer-(java-%26amp%3B-angular)-in-london-at-blackrock","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Angular","Azure cloud","Kafka","Cassandra","Docker","Kubernetes","Micro-frontends","Web-streaming","Type Script","Apache 
Ignite","Redis","UI/UX","APIs","gRPC","Proto-buffs","Spring","Node.JS"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:12:44.479Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Java, Angular, Azure cloud, Kafka, Cassandra, Docker, Kubernetes, Micro-frontends, Web-streaming, Type Script, Apache Ignite, Redis, UI/UX, APIs, gRPC, Proto-buffs, Spring, Node.JS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7caba9c0-9a1"},"title":"Data & Cloud Engineer (H/F)","description":"<p>We are looking for a Data &amp; Cloud Engineer to join our team. As a Data &amp; Cloud Engineer, you will be responsible for developing and implementing technical solutions to transform data into actionable insights. You will work closely with our clients to understand their data needs and develop tailored solutions to meet those needs.</p>\n<p>Our team uses a range of technologies including Python, SQL, Docker, Kubernetes, and Apache Airflow. 
We are looking for someone with strong technical skills and experience working with cloud-based technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop and implement technical solutions to transform data into actionable insights</li>\n<li>Work closely with clients to understand their data needs and develop tailored solutions to meet those needs</li>\n<li>Collaborate with cross-functional teams to ensure seamless delivery of projects</li>\n<li>Stay up-to-date with industry trends and emerging technologies</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>2 years of experience in data engineering</li>\n<li>Strong technical skills in Python, SQL, Docker, Kubernetes, and Apache Airflow</li>\n<li>Experience working with cloud-based technologies</li>\n<li>Strong communication and collaboration skills</li>\n<li>Ability to work in a fast-paced environment</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience working with big data technologies such as Hadoop and Spark</li>\n<li>Knowledge of data warehousing and business intelligence tools</li>\n<li>Experience with data visualization tools such as Tableau and Power BI</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading data company</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n<li>Flexible working hours and remote work options</li>\n</ul>\n<p>If you are a motivated and experienced data engineer looking for a new challenge, please submit your application. 
We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7caba9c0-9a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"fifty-five","sameAs":"https://www.fifty-five.com/","logo":"https://logos.yubhub.co/fifty-five.com.png"},"x-apply-url":"https://jobs.workable.com/view/c6JDDgc6oq5eBJSqCegVw5/hybrid-data-%26-cloud-engineer-(h%2Ff)-in-paris-at-fifty-five","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Docker","Kubernetes","Apache Airflow","Cloud-based technologies"],"x-skills-preferred":["Big data technologies","Data warehousing and business intelligence tools","Data visualization tools"],"datePosted":"2026-04-24T14:10:35.818Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Docker, Kubernetes, Apache Airflow, Cloud-based technologies, Big data technologies, Data warehousing and business intelligence tools, Data visualization tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b8870690-5d6"},"title":"Sr. AI Engineer - Player Intelligence and Growth, Data & Insights (D&I)","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. The Data &amp; Insights (D&amp;I) team transforms data into actionable insights that power EA. We are hiring an AI Engineer to join the Player Intelligence &amp; Growth team within Data and Insights (D&amp;I), reporting to a Sr Manager. This team partners with all of EA&#39;s game studios to offer data science &amp; AI products and solutions. 
For this AI Engineer role, we are looking for applied and practical AI/ML expertise with a focus on Gen AI Solutions.</p>\n<p>As a Sr. AI Engineer, you will help scale our internal AI-powered insights tool by partnering with analysts, product teams, marketing, and titles like EA SPORTS FC™, Apex Legends™, The Sims™, and Madden NFL. You will work directly with game teams/partners (internal clients) to understand their offerings/domain and create AI products and solutions to solve for their use cases. You will develop plans to generalize AI products across titles and review AI tools used within the team, providing guidance and being accountable for the success and the adoption of the project/product.</p>\n<p>You will implement feature enhancements for our AI-powered analytics tool using GCP services, LLMs, and our internal tech stack. You will engage with other Data Scientists and Data Analysts, sharing best practices, and help consult on cross-team projects. You will design, improve and work with our data pipeline that transfers and processes petabytes of data using tools such as AWS, S3, Kubernetes, GCP, Python, Apache Kafka, Ruby &amp; Hive.</p>\n<p>We are looking for a hands-on engineer with practical experience building AI/ML-driven systems, evaluating emerging tools, and delivering impactful, reusable solutions across multiple domains. 
You will have a graduate degree in Computer Science, Engineering, AI/ML, or a related quantitative field and 4+ years of experience building AI, ML, or data-driven systems in production environments.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b8870690-5d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Sr-AI-Engineer-Player-Intelligence-and-Growth-Data-Insights-D-I/211264","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,300 - $170,700 CAD","x-skills-required":["Python","SQL","GCP","LLMs","embeddings","retrieval systems","AI agents","CI/CD","microservices","cloud-native deployment patterns"],"x-skills-preferred":["AWS","S3","Kubernetes","Apache Kafka","Ruby & Hive"],"datePosted":"2026-04-24T13:16:11.540Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, GCP, LLMs, embeddings, retrieval systems, AI agents, CI/CD, microservices, cloud-native deployment patterns, AWS, S3, Kubernetes, Apache Kafka, Ruby & Hive","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":122300,"maxValue":170700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88313c8a-9fa"},"title":"Software Engineer Full Stack","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. 
As a Software Engineer II - Full Stack for Gameplay Services, you will work on providing systems and tooling enabling game teams to leverage our matchmaking system, integrated in EA&#39;s biggest titles and enjoyed by millions of players worldwide.</p>\n<p>Our platform powers online features for EA&#39;s games, serving millions of users each day. We live, breathe, and dream about how we can make every player&#39;s multiplayer experience memorable. We develop services and SDKs in collaboration with EA&#39;s game studios for matchmaking, stats and leaderboards, achievements, game replays, VOIP, and game networking.</p>\n<p>Your focus will be on providing systems and tooling enabling game teams to leverage our matchmaking system. You will collaborate closely with your team and partner studios to maintain, enhance, and extend our core services.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design brand new services covering all aspects from storage to application logic to management console</li>\n<li>Enhance and add features to existing systems</li>\n<li>Communicate with engineers from across the company to deliver the next generation of online features for both established and not-yet-released games</li>\n<li>Be a part of the full product cycle for our products, from design and testing to deployment and supporting our LIVE environments and our game team customers</li>\n<li>Maintain a suite of automated tests that validate the correctness of backend services</li>\n<li>Advocate for best practices within the engineering team</li>\n<li>Work with product managers to improve new features to support EA&#39;s business</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor/Master&#39;s degree in Computer Science, Computer Engineering or related field</li>\n<li>2+ years professional programming experience</li>\n<li>Experience with various programming languages and frameworks (React, Typescript, NodeJS, Golang)</li>\n<li>Deep understanding of HTML, CSS and DOM</li>\n<li>Experience 
with cloud computing products such as AWS EC2, ElastiCache, and ELB</li>\n<li>Experience with technologies such as Docker, Kubernetes, and Terraform</li>\n<li>Experience with relational or NoSQL databases</li>\n<li>Experience with all phases of the product development lifecycle, including requirement definition, development, test, and product release</li>\n<li>Adept at solving complex technical problems</li>\n<li>Strong sense of collaboration</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Motivated self-starter and able to operate with autonomy</li>\n</ul>\n<p>Bonus Qualifications:</p>\n<ul>\n<li>Experience with Jenkins and Groovy</li>\n<li>Experience with Ansible</li>\n<li>Knowledge of Google gRPC and protobuf</li>\n<li>Experience with high-traffic services and highly scalable, distributed systems</li>\n<li>Knowledge of scalable data storage and processing technologies such as Cassandra, Apache Spark, and AWS S3</li>\n<li>Experience with stress testing, performance tuning, and optimization</li>\n<li>Experience working within the games industry</li>\n</ul>\n<p>We thought you might also want to know</p>\n<p>The benefits and perks of working for EA</p>\n<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88313c8a-9fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/211085","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","Typescript","NodeJS","Golang","HTML","CSS","DOM","AWS EC2","ElastiCache","ELB","Docker","Kubernetes","Terraform","relational database","NoSQL database","product development lifecycle"],"x-skills-preferred":["Jenkins","Groovy","Ansible","Google gRPC","protobuf","high traffic services","distributed systems","scalable data storage","Apache Spark","AWS S3","stress testing","performance tuning","games industries"],"datePosted":"2026-04-24T13:15:39.091Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Typescript, NodeJS, Golang, HTML, CSS, DOM, AWS EC2, ElastiCache, ELB, Docker, Kubernetes, Terraform, relational database, NoSQL database, product development lifecycle, Jenkins, Groovy, Ansible, Google gRPC, protobuf, high traffic services, distributed systems, scalable data storage, Apache Spark, AWS S3, stress testing, performance tuning, games industries"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88030e1d-d2f"},"title":"Senior Software Engineer","description":"<p>As a Senior Software Engineer at MHP, you will develop full-stack applications using React and TypeScript on the 
frontend and Node.js (TypeScript) on the backend. You will also define, deploy, and manage infrastructure using AWS CDK (TypeScript) and design and maintain microservices and event-driven systems using Apache Kafka, SNS, SQS, and EventBridge.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing full-stack applications using React and TypeScript on the frontend and Node.js (TypeScript) on the backend</li>\n<li>Defining, deploying, and managing infrastructure using AWS CDK (TypeScript)</li>\n<li>Designing and maintaining microservices and event-driven systems using Apache Kafka, SNS, SQS, and EventBridge</li>\n<li>Ensuring system security, scalability, and observability using tools like IAM, CloudWatch, and X-Ray</li>\n<li>Writing clean, maintainable, and well-documented code</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Senior-level experience working with NodeJS; additional Java experience is an advantage</li>\n<li>Senior-level experience working with frontend technologies such as React and TypeScript</li>\n<li>Mid-senior level experience working with AWS services (S3, Lambda, API Gateway, ECS), Authorization with PPN/Entra-ID (OAuth, OIDC), and Infrastructure as Code (AWS CDK with TypeScript)</li>\n<li>Experience with REST API development</li>\n<li>Hands-on knowledge of responsive UI development and frontend testing</li>\n<li>Hands-on knowledge of CI/CD pipelines with GitLab and test automation</li>\n<li>Problem-solving mindset with the ability to optimize performance and cost management</li>\n<li>Strong communication skills and experience working in cross-functional Agile teams</li>\n<li>Ability to write clean, maintainable, and well-documented code</li>\n<li>Experience in enterprise applications, preferably in the Automotive domain, is a plus</li>\n<li>Bachelor&#39;s Degree in Computer Science or a related field is an advantage</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88030e1d-d2f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18149","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["NodeJS","React","TypeScript","AWS CDK","Apache Kafka","SNS","SQS","EventBridge","IAM","CloudWatch","X-Ray"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:26.208Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Consulting","skills":"NodeJS, React, TypeScript, AWS CDK, Apache Kafka, SNS, SQS, EventBridge, IAM, CloudWatch, X-Ray"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aad66c6a-ad1"},"title":"Lead Data Scientist - Battlefield, Data and Insights (D&I)","description":"<p>We&#39;re hiring a Lead Data Scientist to join our Data &amp; Insights (D&amp;I) Data Science team. The Data Science team partners with EA studios to build scalable AI/ML solutions that enhance player experience, game design, and live service performance.</p>\n<p>You will bring expertise in the area of AI, ML, and engineering. 
You will also lead efforts related to life cycle management, progression, in-game economies, and player experience, specifically, within the Battlefield franchise.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working directly with Battlefield game team/partners to understand their offerings/domain and create data science products and solutions to solve for their use cases.</li>\n</ul>\n<ul>\n<li>Applying problem-driven, AI/ML approaches to improve player experience, engagement, retention, and monetization systems.</li>\n</ul>\n<ul>\n<li>Developing plans to generalize products across the franchise with our engineering partners.</li>\n</ul>\n<ul>\n<li>Establishing rigorous experimental design standards (A/B testing, causal inference, system experimentation) to produce actionable insights.</li>\n</ul>\n<ul>\n<li>Collaborating with engineering partners to productionize models within live environments and gameplay systems.</li>\n</ul>\n<ul>\n<li>Designing and enhancing data pipelines that process petabyte-scale telemetry data using technologies such as AWS, S3, Kubernetes, GCP, Python, Apache Kafka, and Hive.</li>\n</ul>\n<ul>\n<li>Developing algorithms and statistical models for forecasting, player state prediction, churn analysis, progression balancing, and economic system tuning.</li>\n</ul>\n<ul>\n<li>Communicating complex analytical concepts to technical and non-technical partners, influencing strategic decisions.</li>\n</ul>\n<ul>\n<li>Mentoring other data scientists and contributing to shared best practices across the D&amp;I organization.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aad66c6a-ad1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic 
Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Lead-Data-Scientist-Battlefield-Data-and-Insights-D-I/213127","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,400 - $204,400 CAD","x-skills-required":["AI","ML","engineering","data science","AWS","S3","Kubernetes","GCP","Python","Apache Kafka","Hive"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:13:26.748Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, ML, engineering, data science, AWS, S3, Kubernetes, GCP, Python, Apache Kafka, Hive","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":141400,"maxValue":204400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7e078ceb-e9a"},"title":"Data Engineer","description":"<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have an exciting opportunity for you to join our expanding area of Prognostics.</p>\n<p>Are you enthusiastic to mine raw data and realize its hidden value by building amazing, connected data solutions that benefit our customers? Would you love to accelerate our efforts in implementing advanced physics and ML Models in production?</p>\n<p>The Data Engineer role resides within Ford’s Electric Vehicle organization. 
In this role, you will work on building scalable and robust data pipelines to process large volumes of connected vehicle data to support the Ford vehicle prognostic initiatives.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.</li>\n<li>Build data pipelines to monitor the quality of data and the performance of analytical models.</li>\n<li>Maintain the infrastructure of the data platform using Terraform and continuously develop, evaluate, and deliver code using CI/CD.</li>\n<li>Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.</li>\n<li>Implement an enterprise data governance model and actively promote the concepts of data protection, sharing, reuse, quality, and standards.</li>\n<li>Enhance and maintain the DevOps capabilities of the data platform.</li>\n<li>Continuously optimize and enhance existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.</li>\n<li>Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).</li>\n<li>Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.</li>\n<li>Perform any necessary data mapping and data lineage activities, and document information flows.</li>\n<li>Monitor the production pipelines and provide production support by addressing production issues as per SLAs.</li>\n<li>Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.</li>\n<li>Provide visibility into data quality/vehicle/feature issues and work with the business owners to fix the issues.</li>\n<li>Demonstrate technical knowledge and 
communication skills with the ability to advocate for well-designed solutions.</li>\n<li>Continuously enhance your domain knowledge of connected vehicle data, connected services, and algorithms/models developed by data scientists within Ford.</li>\n<li>Stay current on the latest data engineering practices and contribute to the technical direction of the company while keeping a customer-centric approach.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Master’s degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor&#39;s Degree).</li>\n<li>4 years of professional experience in:</li>\n<li>Data engineering, data product development, and software product launches</li>\n<li>At least three of the following languages: Java, Python, Spark, Scala, SQL</li>\n<li>3 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:</li>\n<li>Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery.</li>\n<li>Workflow orchestration tools like Airflow.</li>\n<li>Relational Database Management Systems like MySQL, PostgreSQL, and SQL Server.</li>\n<li>Real-time data streaming platforms like Apache Kafka, GCP Pub/Sub.</li>\n<li>Microservices architecture to deliver large-scale real-time data processing applications.</li>\n<li>REST APIs for compute, storage, operations, and security.</li>\n<li>DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker.</li>\n<li>Project management tools like Atlassian JIRA.</li>\n</ul>\n<p><strong>Even better if you have...</strong></p>\n<ul>\n<li>Ph.D. 
or foreign equivalent degree in Computer Science, Software Engineering, Information System, Data Engineering, or a related field.</li>\n<li>2 years of experience with ML Model Development and/or MLOps.</li>\n<li>Committed code to improve open-source data/software engineering projects</li>\n<li>Experience architecting cloud infrastructure and handling application migrations/upgrades.</li>\n<li>GCP Professional Certifications.</li>\n<li>Demonstrated passion to mine raw data and realize its hidden value.</li>\n<li>Passion to experiment/implement state of the art data engineering methods/techniques.</li>\n<li>Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.</li>\n<li>Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.</li>\n<li>Analytics skills to profile data, troubleshoot data pipeline/product issues.</li>\n<li>Ability to simplify, clearly communicate complex data/software ideas/problems and work with cross-functional teams and all levels of management independently.</li>\n</ul>\n<p>Experience Level: mid</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7e078ceb-e9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://www.ford.com/","logo":"https://logos.yubhub.co/ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/55567","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":"This position is a range of salary grades 6-8.","x-skills-required":["Java","Python","Spark","Scala","SQL","Amazon Redshift","Microsoft Azure Synapse Analytics","Google BigQuery","Airflow","MySQL","PostgreSQL","SQL Server","Apache Kafka","GCP 
Pub/Sub","Microservices","REST APIs","Tekton","GitHub Actions","Git","GitHub","Terraform","Docker","Atlassian JIRA"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:24:19.099Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Java, Python, Spark, Scala, SQL, Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery, Airflow, MySQL, PostgreSQL, SQL Server, Apache Kafka, GCP Pub/Sub, Microservices, REST APIs, Tekton, GitHub Actions, Git, GitHub, Terraform, Docker, Atlassian JIRA"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eb99c035-971"},"title":"Manager, Data Engineering","description":"<p>We&#39;re looking for a seasoned Data Engineering Manager to lead our team in designing, developing, and maintaining data pipelines that support our Data Hub strategy. 
As a key member of our Global Data Insight &amp; Analytics team, you&#39;ll be responsible for building and maintaining data assets and services that empower Artificial Intelligence, Data Science, and Software Engineering.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead a high-performing team of Portfolio Data Engineers, fostering a culture of collaboration, innovation, and continuous improvement.</li>\n<li>Strategically prioritize and manage team workloads, ensuring effective task allocation and resource capacity to support team goals.</li>\n<li>Provide expert technical guidance and mentorship, ensuring adherence to best practices, coding standards, and architectural guidelines.</li>\n<li>Act as the Chief Data Technical Anchor for the PLMA domain, resolving critical incidents through Root Cause Analysis (RCA) and implementing permanent, resilient architectural fixes.</li>\n<li>Oversee the design, development, maintenance, scalability, reliability, and performance of data platform pipelines, aligning them with business needs and strategic objectives.</li>\n<li>Contribute to the long-term strategic direction of the Data Platform by proactively identifying opportunities for best practice adoption and standardization.</li>\n<li>Champion data quality, governance, and security standards, ensuring compliance and safeguarding sensitive data assets.</li>\n<li>Enhance efficiency and reduce redundancy by consolidating common tasks across teams.</li>\n<li>Effectively communicate decisions to stakeholders, building strong relationships and ensuring alignment on data initiatives.</li>\n<li>Maintain awareness of industry trends and emerging technologies to inform technical decisions.</li>\n<li>Lead the implementation of customer requests into data assets, ensuring optimized design and code development.</li>\n<li>Guide the team in delivering scalable, robust data solutions and contribute hands-on to critical projects, including design and code reviews.</li>\n<li>Lead technical 
decisions that drive data innovation and resilience.</li>\n<li>Demonstrate full-stack cloud data engineering expertise, covering automation, versioning, ingestion, integration, transformation, optimization, and data modeling.</li>\n<li>Engage in agile planning, including scope, work breakdown structure, and roadblock resolution.</li>\n<li>Design solutions for cost and consumption optimization, scalability, and performance.</li>\n<li>Collaborate with Data Architecture and stakeholders on solution design, data consolidation, retention, purpose of use, compliance, and audit requirements.</li>\n<li>Drive engineering excellence by establishing and monitoring SWE-centric quality metrics (including DORA metrics and P99 latency targets).</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Technology, Information Systems, Data Analytics, or a related field.</li>\n<li>8+ years of experience in complex data environments, demonstrating increased responsibilities and achievements with:</li>\n</ul>\n<ul>\n<li>Expertise in programming languages such as Python or Scala, and strong SQL skills.</li>\n<li>Experience with ETL/ELT processes, data warehousing, and data modeling.</li>\n<li>Experience with CI/CD pipelines, Docker, Git/Gerrit, and experience designing resilient deployment strategies and sophisticated release management.</li>\n<li>Familiarity with data governance, privacy, quality, and monitoring.</li>\n</ul>\n<ul>\n<li>Proven experience in implementing sophisticated testing strategies, driving quality tool adoption, establishing comprehensive code review processes, and setting observability standards with advanced monitoring and proactive alerting.</li>\n<li>5+ years of experience within the automotive industry or related product development environments and product lifecycle management.</li>\n<li>5+ years of experience in leading software or data engineering teams, with a focus on team development and project success.</li>\n<li>5+ years of experience in Big Data environments or expertise with Big Data tools, including:</li>\n</ul>\n<ul>\n<li>Data processing frameworks and data modeling.</li>\n<li>In-depth knowledge and practical experience with Google Cloud Platform services.</li>\n<li>Proven experience in monitoring and optimizing costs and compute resources in hyperscaler platforms.</li>\n</ul>\n<ul>\n<li>Significant experience leveraging Generative AI and LLMs to optimize data engineering workflows (e.g., automated code generation, documentation, or metadata management).</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science, Engineering, or a related field.</li>\n<li>Expertise in GCP-based data engineering services like BQ, Dataflow, Airflow, Dataform, Datastream, Apache Beam, Cloud Run, and Cloud Functions.</li>\n<li>Familiarity with automotive Product Development processes, including program planning, design validation, and cross-functional collaboration across engineering, manufacturing, and supplier teams to deliver data-driven insights at each lifecycle stage.</li>\n<li>Experience in managing and scaling serverless applications and clusters, focusing on resource optimization and robust monitoring and logging strategies.</li>\n<li>Proficiency in unstructured data ingestion, including experience with data modeling and preparation techniques to support AI and machine learning 
workloads.</li>\n<li>Experience with AI architecture and AI enabling tech (graph database, vector database, etc)</li>\n<li>Familiarity with data visualization tools (e.g., Power BI, Tableau).</li>\n<li>Working knowledge of ontology, semantic modeling, and related technologies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eb99c035-971","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://corporate.ford.com/","logo":"https://logos.yubhub.co/corporate.ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62339","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","Scala","SQL","ETL/ELT processes","data warehousing","data modeling","CI/CD pipelines","Docker","Git/Gerrit","data governance","privacy","quality","monitoring"],"x-skills-preferred":["Generative AI","LLMs","GCP based data engineering services","BQ","Dataflow","Airflow","Dataform","Datastream","Apache Beam","Cloud Run","Cloud Functions","automotive Product Development processes","program planning","design validation","cross-functional collaboration","data-driven insights","unstructured data ingestion","preparation techniques","AI architecture","AI enabling tech","graph database","vector database","data visualization tools","ontology","semantic modeling"],"datePosted":"2026-04-24T12:19:58.496Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Python, Scala, SQL, ETL/ELT processes, data warehousing, data modeling, CI/CD pipelines, Docker, Git/Gerrit, data governance, privacy, quality, monitoring, 
Generative AI, LLMs, GCP based data engineering services, BQ, Dataflow, Airflow, Dataform, Datastream, Apache Beam, Cloud Run, Cloud Functions, automotive Product Development processes, program planning, design validation, cross-functional collaboration, data-driven insights, unstructured data ingestion, preparation techniques, AI architecture, AI enabling tech, graph database, vector database, data visualization tools, ontology, semantic modeling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c920de1-7f9"},"title":"Principal Software Engineer","description":"<p>Join Microsoft AI&#39;s Copilot Discover Engineering Team as a Principal Software Engineer, serving as a senior technical architect to play a central role in the technical direction and long-range architecture of Copilot Discover.</p>\n<p>This is a role emphasizing true end-to-end responsibility: setting the architectural vision, shaping the platform for AI-forward discovery experiences, and steering the evolution of product experiences that sit at the heart of how users engage with the intersection of knowledge, content, and personalization on surfaces on which Copilot shows up.</p>\n<p>You will design and drive the systems that power the Copilot Discover feed at scale. You&#39;ll work on foundational platforms that ingest, enrich, rank, personalize, and serve content across web, mobile, and partner surfaces and lead architectural strategy for how we unify signals, models, and data into coherent, trustworthy experiences; modernize our ranking and personalization stack; and build the AI-forward infrastructure that makes Copilot Discover feel intelligent, anticipatory, and personalized for every user.</p>\n<p>The key is an end-to-end focus on outcomes, across a broad technical space. You&#39;ll be expected to influence platform direction across multiple teams and adjacent organizations. 
You will ensure that the MSN and Copilot Discover systems are robust, scalable, privacy-respecting, and engineered for long-term adaptation.</p>\n<p>High product sense is a success factor – how you drive product and architectural convergence across today&#39;s fragmented surfaces, reduce complexity, and shape a consistent platform model is key to the success of the product and this role.</p>\n<p>Copilot Discover sits at the intersection of content, signals, and user intent. Our ambition is to make it a durable, strategic layer that powers intelligent, personalized, and trusted discovery experiences across a broad array of surfaces where Microsoft engages consumers in their journeys.</p>\n<p>If you are passionate about building high-scale, AI-driven systems that combine solid architectural rigor with meaningful user value, this is the role for you.</p>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Own the technical direction for Copilot Discover platforms, setting end-to-end architectural strategy.</p>\n<p>Partner with product, design, data science, and engineering leaders to translate business and user needs into executable architectural plans, well-documented designs, and multi-year roadmaps.</p>\n<p>Set and govern architectural decisions across multiple services and teams, ensuring systems are scalable, secure, reliable, cost-efficient, and grounded in data, telemetry, and operational excellence.</p>\n<p>Raise the technical bar across the organization by establishing falsifiable principles, reviewing critical designs, and helping to develop technical leaders within the team.</p>\n<p>Establish and evolve quality and reliability standards, including test strategies, CI/CD practices, monitoring, alerting, and live-site health.</p>\n<p>Shape the adoption of AI/ML techniques for content understanding, personalization, summarization, and safety, in close collaboration with MAI and partner teams.</p>\n<p>Serve as a cross-org technical leader, aligning MSN architecture with Bing, Copilot, Ads, Privacy, Trust, and other Microsoft platforms.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 15+ years technical engineering 
experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Experience in ML/AI systems, especially in content understanding, ranking, or personalization.</p>\n<p>Proven experience designing and operating large-scale distributed systems, including data pipelines, microservices, APIs, and storage systems.</p>\n<p>Experience with content platforms, personalization systems, or consumer-facing services at scale.</p>\n<p>Experience with technologies such as Apache Spark, Kafka, columnar storage, data modeling, and schema evolution.</p>\n<p>Demonstrated success as a technical lead or architect, influencing across teams without direct authority.</p>\n<p>Solid understanding of system architecture, performance tuning, telemetry design, and operational excellence.</p>\n<p>Excellent analytical and communication skills, with the ability to clearly articulate complex technical concepts.</p>\n<p>Solid cross-organizational collaboration skills and the ability to influence senior stakeholders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c920de1-7f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-52/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,000 - $296,400 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Kafka","columnar storage","data modeling","schema 
evolution"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:19:50.797Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Kafka, columnar storage, data modeling, schema evolution","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163000,"maxValue":296400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68a62835-66b"},"title":"Senior DevOps Engineer","description":"<p>We are seeking a highly skilled and self-motivated Senior Embedded DevOps Engineer to support our engineering teams. This role will focus on driving changes and ensuring adherence to company-established standards for data infrastructure and CI/CD pipelines.</p>\n<p>The ideal candidate will have strong experience working with AWS and/or GCP, cloud-based data streaming and processing services, containerized application deployments, infrastructure automation, and Site Reliability Engineering (SRE) best practices for performance and cost optimization.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Drive initiatives to implement and enforce best practices for data streaming, processing, analytics and monitoring infrastructure.</li>\n<li>Deploy and manage services on Kubernetes-based platforms such as Amazon EKS and Google Kubernetes Engine (GKE).</li>\n<li>Provision and manage cloud infrastructure using Terraform, ensuring best practices in security, scalability, and cost-efficiency.</li>\n<li>Maintain and optimize CI/CD pipelines using Jenkins, ArgoCD, and GitHub Enterprise Actions to support automated deployments and testing.</li>\n<li>Work with cloud-native data services such as AWS Kinesis, AWS Glue, Google Dataflow, and Google Pub/Sub, BigQuery, 
BigTable.</li>\n<li>Work with workflow orchestration services such as Apache Airflow and Google Cloud Composer.</li>\n<li>Develop and maintain automation scripts and tooling using Python to support DevOps processes.</li>\n<li>Monitor system performance, troubleshoot issues, and implement proactive solutions to enhance reliability and efficiency.</li>\n<li>Implement SRE practices to improve service reliability, scalability, and cost-effectiveness.</li>\n<li>Analyze and optimize cloud costs, identifying areas for improvement and implementing cost-saving strategies.</li>\n<li>Ensure compliance with security policies and best practices in cloud environments.</li>\n<li>Drive adoption of company standards and influence data teams to align with best DevOps and SRE practices.</li>\n<li>Collaborate with cross-functional teams to improve development workflows and infrastructure.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of experience in a DevOps, Site Reliability Engineering, or Cloud Infrastructure role.</li>\n<li>Strong experience with AWS and GCP data services, including Kinesis, Glue, Pub/Sub, and Dataflow.</li>\n<li>Proficiency in deploying and managing workloads on Kubernetes (EKS/GKE) in production environments.</li>\n<li>Hands-on experience with Infrastructure-as-Code (IaC) using Terraform.</li>\n<li>Expertise in CI/CD pipeline management using Jenkins, ArgoCD, and GitHub Enterprise Actions.</li>\n<li>Programming skills in Python for automation and scripting.</li>\n<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or CloudWatch).</li>\n<li>Strong understanding of SRE principles, including performance monitoring, incident response, and reliability engineering.</li>\n<li>Experience with cost optimization strategies for cloud infrastructure.</li>\n<li>Self-motivated and driven, with a strong ability to influence and drive changes across multiple teams.</li>\n<li>Ability to work collaboratively in an agile 
environment and support multiple teams.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with data lake architectures and big data processing frameworks (e.g., Apache Spark, Flink, Snowflake, BigQuery).</li>\n<li>Familiarity with event-driven architectures and message queues (e.g., Kafka, RabbitMQ).</li>\n<li>Experience with workflow orchestration tools such as Apache Airflow and Google Cloud Composer.</li>\n<li>Knowledge of service mesh technologies like Istio.</li>\n<li>Experience with GitOps workflows and Kubernetes-native tooling.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_68a62835-66b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8496473002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","GCP","Kubernetes","Terraform","Jenkins","ArgoCD","GitHub Enterprise Actions","Python","Apache Airflow","Google Cloud Composer","CloudWatch","Prometheus","Grafana","Datadog"],"x-skills-preferred":["Data lake architectures","Big data processing frameworks","Event-driven architectures","Message queues","Workflow orchestration tools","Service mesh technologies","GitOps workflows","Kubernetes-native tooling"],"datePosted":"2026-04-24T12:19:32.227Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, GCP, Kubernetes, Terraform, Jenkins, ArgoCD, GitHub Enterprise Actions, Python, Apache Airflow, Google Cloud Composer, CloudWatch, Prometheus, Grafana, Datadog, Data lake architectures, Big data 
processing frameworks, Event-driven architectures, Message queues, Workflow orchestration tools, Service mesh technologies, GitOps workflows, Kubernetes-native tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_653bca90-18d"},"title":"Engineering Manager, Organizations (Auth0)","description":"<p>We are looking for an experienced Engineering Manager to lead our Organizations team. As an Engineering Manager, you will be responsible for managing a team of 9 remote engineers, mentoring and coaching them to achieve their goals. You will work closely with the Product Manager to plan and deliver the team&#39;s quarterly and annual roadmap. You will also be responsible for owning and being accountable for the quality of the team&#39;s technical estate, effectively managing technical debt, addressing security vulnerabilities, and ensuring wider cross-team technical initiatives are delivered in a timely manner.</p>\n<p>The ideal candidate will have experience growing engineers to the next level, bringing off-track engineers back on track, and working on projects that require close collaboration with external teams. They will also have solid architectural knowledge, backed by experience in designing, implementing, and evolving complex distributed systems.</p>\n<p>In particular, you will be able to spot areas where scalability and performance might be affected. You will know how to track and steer a project to successful and timely delivery. 
Experience in authentication protocols such as OAuth2, OIDC, SAML, and understanding of event-driven architectures, especially Apache Kafka, is a plus.</p>\n<p>As an Engineering Manager at Okta, you will have the opportunity to work on a wide range of challenging projects, collaborate with a talented team of engineers, and contribute to the growth and success of the company.</p>\n<p>If you are a motivated and experienced engineer looking for a new challenge, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_653bca90-18d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7843717","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$168,000-$231,000 CAD","x-skills-required":["NodeJS","JavaScript","TypeScript","PostgreSQL","AWS","Azure","Containers","Authentication protocols","Event-driven architectures"],"x-skills-preferred":["OAuth2","OIDC","SAML","Apache Kafka"],"datePosted":"2026-04-24T12:18:53.914Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"NodeJS, JavaScript, TypeScript, PostgreSQL, AWS, Azure, Containers, Authentication protocols, Event-driven architectures, OAuth2, OIDC, SAML, Apache 
Kafka","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":168000,"maxValue":231000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7cee676b-646"},"title":"Staff MLE","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. From Blend to Discover Weekly, we’re behind some of Spotify’s most-loved features. We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>We are looking for a Staff MLE to join Surfaces Podcasts. The Surfaces Podcasts team builds the systems that power podcast recommendations across some of Spotify’s most visible experiences, including Home and the Now Playing view. We work across candidate generation, ranking, and embedding models to help listeners discover their favorite new podcast and engage deeply with their favorite shows.</p>\n<p>We’re also shaping the next generation of personalization through transformer-based models that bring more dynamic, context-aware recommendations to millions of listeners. 
You’ll collaborate closely with teams across Personalization, Experience, and the Podcast Mission to evolve podcast listening across Spotify.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to designing, scaling/building, evaluating, integrating, shipping, and refining reward signals for recommendations by hands-on ML development</li>\n</ul>\n<ul>\n<li>Promote and role-model best practices of ML systems development, testing, evaluation, etc., both inside the team as well as throughout the organization.</li>\n</ul>\n<ul>\n<li>Lead collaborations and align across PZN to integrate and A/B test mid-term signals in various recommendation systems</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You have a strong background in machine learning, enjoy applying theory to develop real-world applications, with expertise in statistics and optimization, especially in sequential models, transformers, generative AI and large language models, and relevant fine-tuning processes.</li>\n</ul>\n<ul>\n<li>You have hands-on experience with large cross-collaborative machine learning projects and managing stakeholders.</li>\n</ul>\n<ul>\n<li>You have hands-on experience implementing production machine learning systems at scale in Java, Scala, Python, or similar languages. Experience with PyTorch, Ray, Hugging Face and related tools is required.</li>\n</ul>\n<ul>\n<li>You have some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS.</li>\n</ul>\n<ul>\n<li>You care about agile software processes, data-driven development, reliability, and disciplined experimentation.</li>\n</ul>\n<p><strong>Where You’ll Be</strong></p>\n<ul>\n<li>We offer you the flexibility to work where you work best! 
For this role, you can be within North America as long as we have a work location.</li>\n</ul>\n<ul>\n<li>This team operates within the Eastern Standard time zone for collaboration</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>The United States base range for this position is $227,495 - $324,993 plus equity. The benefits available for this position include health insurance, six month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7cee676b-646","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/3f816a31-2336-4e29-a5bf-6b147c604c2f","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$227,495-$324,993","x-skills-required":["machine learning","statistics","optimization","sequential models","transformers","generative AI","large language models","Java","Scala","Python","PyTorch","Ray","Hugging Face","Apache Beam","Apache Spark","Scio","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:17:41.574Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, optimization, sequential models, transformers, generative AI, large language models, Java, Scala, Python, PyTorch, Ray, Hugging Face, Apache Beam, Apache Spark, Scio, GCP, 
AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227495,"maxValue":324993,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0c30c21-9ae"},"title":"Staff Software Engineer, Data Engineering","description":"<p>You&#39;ll own Gamma&#39;s data infrastructure and architecture as we scale to hundreds of millions of users and petabytes of data. This means defining the technical strategy for our end-to-end event pipeline architecture, designing distributed systems that handle massive scale with reliability, and establishing the foundation for how data flows through Gamma.</p>\n<p>As a Staff Data Engineer, you&#39;ll balance hands-on engineering with technical leadership. You&#39;ll architect solutions for orders of magnitude growth, mentor engineers across the organization, and drive strategic decisions about our data stack. You&#39;ll work closely with analytics, product, and engineering leadership to enable data-driven decision making at scale while building systems that serve millions of users and inform critical business decisions.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. 
We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own and evolve our end-to-end event pipeline architecture, from Kafka ingestion through Snowflake analytics, setting technical direction for data infrastructure</li>\n<li>Design and architect distributed data systems that scale to orders of magnitude more data volume while maintaining world-class query performance</li>\n<li>Lead initiatives to build and optimize CDC (change data capture) pipelines and streaming data transformations at massive scale</li>\n<li>Establish best practices for data quality, pipeline reliability, and system observability across the organization</li>\n<li>Drive strategic technical decisions about data modeling, infrastructure architecture, and technology choices</li>\n<li>Mentor engineers and elevate data engineering practices across analytics, product, and engineering teams</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years as a data or software engineer with deep expertise in distributed systems, data infrastructure, and high-growth SaaS products at massive scale</li>\n<li>Expert-level knowledge of Apache Kafka (producers, consumers, Kafka Connect, stream processing) and event streaming platforms</li>\n<li>Extensive hands-on experience with Snowflake, including performance optimization, cost management, and data modeling; strong foundation in Postgres, CDC patterns, and replication strategies</li>\n<li>Proven track record architecting and leading major data infrastructure initiatives through orders-of-magnitude growth</li>\n<li>Experience establishing best practices and driving technical strategy across organizations</li>\n<li>Strong communication skills with a history of influencing technical direction across engineering, analytics, and leadership</li>\n<li>Proficiency with dbt, Terraform, and working knowledge of data governance, privacy 
compliance (GDPR, CCPA), and security best practices</li>\n</ul>\n<p><strong>Compensation Range</strong></p>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c0c30c21-9ae","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/4b2c97d1-b12b-46b7-9e24-1fcd248e28a3","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["Apache Kafka","Snowflake","Postgres","dbt","Terraform","data governance","privacy compliance","security best practices"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:17:12.124Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Kafka, Snowflake, Postgres, dbt, Terraform, data governance, privacy compliance, security best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccc144a8-284"},"title":"Machine Learning Engineer","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. We&#39;re behind some of Spotify&#39;s most-loved features, such as Blend and Discover Weekly. 
We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>We are looking for a Machine Learning Engineer to join the Personalization team. As an integral part of the squad, you will collaborate with research scientists, data scientists and other engineers across PZN in prototyping and productizing state-of-the-art ML at the intersection of recommendations and long-term user satisfaction.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to designing, scaling/building, evaluating, integrating, shipping, and refining reward signals for recommendations by hands-on ML development</li>\n<li>Promote and role-model best practices of ML systems development, testing, evaluation, etc., both inside the team as well as throughout the organization</li>\n<li>Lead collaborations and align across PZN to integrate and A/B test mid-term signals in various recommendation systems</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong background in machine learning, with expertise in statistics and optimization, especially in sequential models, transformers, generative AI and large language models, and relevant fine-tuning processes</li>\n<li>Hands-on experience with large cross-collaborative machine learning projects and managing stakeholders</li>\n<li>Hands-on experience implementing production machine learning systems at scale in Java, Scala, Python, or similar languages. Experience with PyTorch, Ray, Hugging Face and related tools is required</li>\n<li>Some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS</li>\n<li>Care about agile software processes, data-driven development, reliability, and disciplined experimentation</li>\n</ul>\n<p><strong>Where You&#39;ll Be</strong></p>\n<ul>\n<li>We offer you the flexibility to work where you work best! 
For this role, you can be within the North America and EMEA regions as long as we have a work location</li>\n<li>This team operates within the Eastern Standard time zone for collaboration</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>The United States base range for this position is $227,495-$324,993 plus equity. The benefits available for this position include health insurance, six month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccc144a8-284","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/f3616bfc-a2bb-4847-90e1-0437b8a1c054","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,495-$324,993","x-skills-required":["machine learning","statistics","optimization","sequential models","transformers","generative AI","large language models","PyTorch","Ray","Hugging Face","Apache Beam","Apache Spark","Scio","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:16:59.999Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"EMEA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, optimization, sequential models, transformers, generative AI, large language models, PyTorch, Ray, Hugging Face, Apache Beam, Apache Spark, Scio, GCP, 
AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227495,"maxValue":324993,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cecd01f7-106"},"title":"Machine Learning Engineer","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. We&#39;re behind some of Spotify&#39;s most-loved features, such as Blend and Discover Weekly. We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>We are looking for a Machine Learning Engineer to join the Personalization team. As an integral part of the squad, you will collaborate with research scientists, data scientists and other engineers across PZN in prototyping and productizing state-of-the-art ML at the intersection of recommendations and long-term user satisfaction.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to designing, scaling/building, evaluating, integrating, shipping, and refining reward signals for recommendations by hands-on ML development</li>\n<li>Promote and role-model best practices of ML systems development, testing, evaluation, etc., both inside the team as well as throughout the organization</li>\n<li>Lead collaborations and align across PZN to integrate and A/B test mid-term signals in various recommendation systems</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong background in machine learning, with expertise in statistics and optimization, especially in sequential models, transformers, generative AI and large language models, and relevant fine-tuning processes</li>\n<li>Hands-on experience with large cross-collaborative machine learning projects and managing stakeholders</li>\n<li>Hands-on experience implementing production machine learning systems at scale in Java, Scala, Python, or similar languages. 
Experience with PyTorch, Ray, Hugging Face and related tools is required</li>\n<li>Some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS</li>\n<li>Care about agile software processes, data-driven development, reliability, and disciplined experimentation</li>\n</ul>\n<p><strong>Where You&#39;ll Be</strong></p>\n<ul>\n<li>We offer you the flexibility to work where you work best! For this role, you can be within the North America and EMEA regions as long as we have a work location</li>\n<li>This team operates within the Eastern Standard time zone for collaboration</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>The United States base range for this position is $227,495 - $324,993 plus equity. The benefits available for this position include health insurance, six month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cecd01f7-106","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/736f1827-6b26-4b3b-b8d8-1d754296e033","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,495-$324,993","x-skills-required":["machine learning","statistics","optimization","sequential models","transformers","generative AI","large language models","PyTorch","Ray","Hugging Face","Apache Beam","Apache 
Spark","Scio","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:16:51.109Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"EMEA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, optimization, sequential models, transformers, generative AI, large language models, PyTorch, Ray, Hugging Face, Apache Beam, Apache Spark, Scio, GCP, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227495,"maxValue":324993,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3513ac8f-9c4"},"title":"Staff Software Engineer, PostgreSQL","description":"<p>You&#39;ll own Gamma&#39;s PostgreSQL infrastructure as we scale from 70 million users to hundreds of millions, and from terabytes of data to hundreds of terabytes. Your job is to make sure our database can handle orders of magnitude more usage without compromising performance.</p>\n<p>This is a deeply technical, hands-on role. You&#39;ll read and write code daily, dig into low-level systems, debug complex issues across massive datasets, and work on both core database scaling projects and application features. You&#39;ll collaborate closely with backend engineers, data engineers, and infrastructure teams to ensure our database architecture keeps pace with Gamma&#39;s growth.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. 
We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Architect and implement solutions for horizontally scaling PostgreSQL to hundreds of millions of users and hundreds of terabytes of data</li>\n</ul>\n<ul>\n<li>Own database performance, availability, and reliability as usage grows by orders of magnitude</li>\n</ul>\n<ul>\n<li>Debug complex issues across very large datasets and optimize query performance at scale</li>\n</ul>\n<ul>\n<li>Establish best practices for database design, query optimization, and data modeling across engineering</li>\n</ul>\n<ul>\n<li>Work across core infrastructure and application features that depend on database architecture</li>\n</ul>\n<ul>\n<li>Collaborate with backend, data, and infrastructure engineers to align database strategy with product needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years of software engineering experience with deep expertise in large-scale relational database systems, including hands-on experience managing hundreds of terabytes of data in production</li>\n</ul>\n<ul>\n<li>Expert-level understanding of PostgreSQL (or comparable relational databases), horizontal scaling techniques such as sharding and partitioning, and complex query tuning</li>\n</ul>\n<ul>\n<li>Strong programming skills in at least one backend language, with experience writing and maintaining highly available web APIs</li>\n</ul>\n<ul>\n<li>Experience with large-scale event streaming systems, preferably Apache Kafka</li>\n</ul>\n<ul>\n<li>Ability to explain complex technical concepts clearly to engineers across teams</li>\n</ul>\n<ul>\n<li>Familiarity with TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, or AI/LLM tooling (Nice to have)</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, 
ranges between $230K - $310K plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3513ac8f-9c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/f672c729-457f-4143-80e9-363ddf8a0870","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["PostgreSQL","horizontal scaling","sharding","partitioning","complex query tuning","backend language","web APIs","Apache Kafka"],"x-skills-preferred":["TypeScript","Prisma","Apollo GraphQL","Terraform","AWS","AI/LLM tooling"],"datePosted":"2026-04-24T12:16:45.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, horizontal scaling, sharding, partitioning, complex query tuning, backend language, web APIs, Apache Kafka, TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, AI/LLM tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_74402ab5-601"},"title":"Senior Machine Learning Engineer - Ads R&D","description":"<p>Our mission on the Advertising Product &amp; Technology team is to build a next-generation advertising platform that aligns with our unique value proposition for audio and video. We work to scale the user experience for hundreds of millions of fans and hundreds of thousands of advertisers. 
This scale brings unique challenges as well as tremendous opportunities for our artists and creators.</p>\n<p>We are seeking a Senior Machine Learning Engineer to join the Supply Personalization squad. Supply Personalization focuses on optimizing the volume, timing, and types of ad loads a user receives. By leveraging data, machine learning, causal inference, and large-scale online experimentation, we aim to uncover and learn the most effective strategies for enhancing user experiences and driving business outcomes.</p>\n<p>As a Senior Machine Learning Engineer, you will design and implement machine learning systems for ad performance optimization. You will research and apply ML optimization strategies to balance multiple objectives effectively. You will analyze data and use machine learning techniques to understand user behavior and improve ad experiences. You will collaborate with backend engineers, data scientists, data engineers, and product managers to establish baselines, inform product decisions, and develop new technologies.</p>\n<p>The ideal candidate will have professional experience in applied machine learning. They will have strong technical expertise in software engineering, data analysis, and machine learning. They will be proficient in programming languages such as Python, Java, or Scala. They will have experience with TensorFlow or PyTorch and working with various aspects of the ML lifecycle. 
They will also have expertise in developing data pipelines using tools like Apache Beam or Spark.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_74402ab5-601","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/6236f25f-f9cc-47c2-af7b-4ace57332eeb","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"permanent","x-salary-range":"$184,050.00 - $262,928.00","x-skills-required":["machine learning","software engineering","data analysis","Python","Java","Scala","TensorFlow","PyTorch","Apache Beam","Spark"],"x-skills-preferred":["LLMs","Ray","Adtech","Recommender Systems"],"datePosted":"2026-04-24T12:16:04.900Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"jobLocationType":"TELECOMMUTE","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, software engineering, data analysis, Python, Java, Scala, TensorFlow, PyTorch, Apache Beam, Spark, LLMs, Ray, Adtech, Recommender Systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":184050,"maxValue":262928,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fdb57476-4a9"},"title":"Backend Engineer","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. From Blend to Discover Weekly, we&#39;re behind some of Spotify&#39;s most-loved features. 
We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>Join us and you&#39;ll keep millions of users listening by making great recommendations to each and every one of them.</p>\n<p>You&#39;ll join a team working at the intersection of backend engineering, music understanding, and user experience. We focus on building the backend systems that power agentic music fulfilment products from conversational playlist generation to adaptive listening experiences that give users more intuitive control over what they listen to.</p>\n<p>This team collaborates closely with product, design, user research, data science, and machine learning to build personalized, high-impact features used by hundreds of millions of listeners worldwide.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and ship backend services that power LLM-based music fulfilment experiences, giving users more adaptive control over their listening</li>\n</ul>\n<ul>\n<li>Build and maintain the APIs and distributed systems behind prompted playlist experiences, session generation, and agentic music products</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional partners across user research, design, data science, product, and ML engineering to build new product features that connect artists and fans in personalized and meaningful ways</li>\n</ul>\n<ul>\n<li>Be a technical leader and valued contributor in an autonomous, cross-functional agile team</li>\n</ul>\n<ul>\n<li>Prototype new approaches and productionize solutions at scale for hundreds of millions of active users</li>\n</ul>\n<ul>\n<li>Contribute to the Spotify-wide backend developer community, affecting and driving architecture across the company</li>\n</ul>\n<ul>\n<li>Promote best practices in backend system design, testing, and deployment across the organization</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You are an experienced backend engineer who enjoys solving complex 
real-world problems in a fast-paced, collaborative environment</li>\n</ul>\n<ul>\n<li>You have experience working directly with stakeholders to understand, document, and develop APIs and systems to meet their requirements, driving increased adoption and reducing reliance on custom one-off implementations</li>\n</ul>\n<ul>\n<li>You have experience writing distributed, high-volume services and know how to deploy and keep them running in production</li>\n</ul>\n<ul>\n<li>You have a deep understanding of system design, data structures, and algorithms</li>\n</ul>\n<ul>\n<li>You are comfortable working with LLM-based systems and building the backend infrastructure that supports them</li>\n</ul>\n<ul>\n<li>You have experience with large-scale distributed data processing tools such as Apache Beam or Apache Spark</li>\n</ul>\n<ul>\n<li>You have worked with cloud platforms like GCP or AWS</li>\n</ul>\n<ul>\n<li>You love working in an environment where you constantly experiment and iterate quickly</li>\n</ul>\n<ul>\n<li>You believe data is the most powerful tool for informed decision-making</li>\n</ul>\n<ul>\n<li>You care about quality and you know what it means to ship high-quality code</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Health insurance</li>\n</ul>\n<ul>\n<li>Six month paid parental leave</li>\n</ul>\n<ul>\n<li>401(k) retirement plan</li>\n</ul>\n<ul>\n<li>Monthly meal allowance</li>\n</ul>\n<ul>\n<li>23 paid days off</li>\n</ul>\n<ul>\n<li>13 paid flexible holidays</li>\n</ul>\n<ul>\n<li>Paid sick leave</li>\n</ul>\n<p><strong>Salary</strong></p>\n<p>The United States base range for this position is $160,091 - $228,702 plus equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fdb57476-4a9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/ab6947fc-adc4-41db-ad11-8fae741ceff0","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,091 - $228,702","x-skills-required":["backend engineering","music understanding","user experience","LLM-based systems","Apache Beam","Apache Spark","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:59.803Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend engineering, music understanding, user experience, LLM-based systems, Apache Beam, Apache Spark, GCP, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160091,"maxValue":228702,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6ecebedb-31e"},"title":"Member of Technical Staff - Data Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. 
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data so that we can constantly evolve and improve human-AI interactions.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. 
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n<li>Ship high-quality, well-tested, secure, and maintainable code.</li>\n<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n<li>3+ years experience with data governance, data compliance and/or data security.</li>\n<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>\n<li>Extensive use of datastores like RDBMS, key-value stores, etc.</li>\n<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>\n<li>Ability to identify, analyze, and resolve
complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>\n<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>\n<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>\n<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>\n</ul>\n<p>#mai-datainsights</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6ecebedb-31e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-5/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["Python","Java","Spark","SQL","Apache Hadoop","Kafka","NoSQL","Azure","AWS","GCP","RDBMS","key-value stores"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:55.844Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, Azure, AWS, GCP, RDBMS, key-value
stores","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f0a9221-9cb"},"title":"Data Engineer","description":"<p>You&#39;ll join the Data Collection Product Area within our Platform mission, where we build and operate the systems that power how data flows across Spotify. Our team develops the core event delivery infrastructure that enables hundreds of teams to collect and use data at massive scale. Every day, we support the delivery of trillions of events that help shape Spotify&#39;s products and unlock new innovations for creators and listeners alike.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and improve the infrastructure that powers Spotify&#39;s event delivery systems at global scale</li>\n</ul>\n<ul>\n<li>Develop backend services using Java and Apollo, and build batch and real-time data pipelines using tools like Scio and Apache Beam</li>\n</ul>\n<ul>\n<li>Work closely with your squad to ensure systems are reliable, efficient, and continuously evolving to meet user needs</li>\n</ul>\n<ul>\n<li>Take shared ownership of operational responsibilities, including monitoring, troubleshooting, and improving system health</li>\n</ul>\n<ul>\n<li>Collaborate with other teams across the Data Platform and broader R&amp;D organization to deliver impactful data solutions</li>\n</ul>\n<ul>\n<li>Contribute to evolving our use of cloud technologies such as Google Cloud Pub/Sub, GKE, and Dataflow</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience building backend systems and data pipelines using Java and Scala</li>\n</ul>\n<ul>\n<li>Understanding of distributed systems and comfort working with large-scale, cloud-based infrastructure</li>\n</ul>\n<ul>\n<li>Solid foundation in system design, data structures, 
and algorithms</li>\n</ul>\n<ul>\n<li>Experience with modern data processing frameworks such as Apache Beam or similar technologies</li>\n</ul>\n<ul>\n<li>Care about building reliable systems and familiarity with continuous integration and delivery practices</li>\n</ul>\n<ul>\n<li>Curiosity and motivation to solve complex technical challenges in high-scale environments</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with others and value open feedback and continuous learning</li>\n</ul>\n<ul>\n<li>Comfortable working in agile teams and contributing to a culture of experimentation and improvement</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f0a9221-9cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/baa87498-b0a3-4ac5-b197-a224e93c8a07","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","Apache Beam","Scio","Google Cloud Pub/Sub","GKE","Dataflow"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:10.236Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Apache Beam, Scio, Google Cloud Pub/Sub, GKE, Dataflow"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8797f9b8-aca"},"title":"Principal Software Engineer","description":"<p>Join Microsoft AI&#39;s Copilot Discover Engineering Team as a Principal Software Engineer, serving as a senior technical architect to play a central role in the technical direction and long-range architecture of Copilot 
Discover.</p>\n<p>This is a role emphasizing true end-to-end responsibility: setting the architectural vision, shaping the platform for AI-forward discovery experiences, and steering the evolution of product experiences that sit at the heart of how users engage with the intersection of knowledge, content, and personalization on surfaces on which Copilot shows up.</p>\n<p>You will design and drive the systems that power the Copilot Discover feed at scale. You&#39;ll work on foundational platforms that ingest, enrich, rank, personalize, and serve content across web, mobile, and partner surfaces and lead architectural strategy for how we unify signals, models, and data into coherent, trustworthy experiences; modernize our ranking and personalization stack; and build the AI-forward infrastructure that makes Copilot Discover feel intelligent, anticipatory, and personalized for every user.</p>\n<p>The key is an end-to-end focus on outcomes, across a broad technical space. You&#39;ll be expected to influence platform direction across multiple teams and adjacent organizations. You will ensure that the MSN and Copilot Discover systems are robust, scalable, privacy-respecting, and engineered for long-term adaptation.</p>\n<p>High product sense is a success factor – how you drive product and architectural convergence across today&#39;s fragmented surfaces, reduce complexity, and shape a consistent platform model is key to the success of the product and this role.</p>\n<p>Copilot Discover sits at the intersection of content, signals, and user intent. 
Our ambition is to make it a durable, strategic layer that powers intelligent, personalized, and trusted discovery experiences across a broad array of surfaces where Microsoft engages consumers in their journeys.</p>\n<p>If you are passionate about building high-scale, AI-driven systems that combine solid architectural rigor with meaningful user value, this is the role for you.</p>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Own the technical direction for Copilot Discover platforms, setting end-to-end architectural strategy.</p>\n<p>Partner with product, design, data science, and engineering leaders to translate business and user needs into executable architectural plans, well-documented designs, and multi-year roadmaps.</p>\n<p>Set and govern architectural decisions across multiple services and teams, ensuring systems are scalable, secure, reliable, cost-efficient, and grounded in data, telemetry and operational excellence.</p>\n<p>Raise the technical bar across the organization by establishing falsifiable principles, reviewing critical designs, and helping to develop technical leaders within the team.</p>\n<p>Establish and evolve quality and reliability standards, including test strategies, CI/CD practices, monitoring, alerting, and live-site health.</p>\n<p>Shape the adoption of AI/ML techniques for content understanding, personalization, summarization, and safety, in close collaboration with MAI and partner teams.</p>\n<p>Serve as a cross-org technical leader, aligning MSN architecture with Bing, Copilot, Ads, Privacy, Trust, and other Microsoft
platforms.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 15+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Experience in ML/AI systems, especially in content understanding, ranking, or personalization.</p>\n<p>Proven experience designing and operating large-scale distributed systems, including data pipelines, microservices, APIs, and storage systems.</p>\n<p>Experience with content platforms, personalization systems, or consumer-facing services at scale.</p>\n<p>Experience with technologies such as Apache Spark, Kafka, columnar storage, data modeling, and schema evolution.</p>\n<p>Demonstrated success as a technical lead or architect, influencing across teams without direct authority.</p>\n<p>Solid understanding of system architecture, performance tuning, telemetry design, and operational excellence.</p>\n<p>Excellent analytical and communication skills, with the ability to clearly articulate complex technical concepts.</p>\n<p>Solid cross-organizational collaboration skills and the ability to influence senior stakeholders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8797f9b8-aca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-51/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,000 - $296,400 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Kafka","columnar storage","data modeling","schema evolution"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:14:00.100Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Kafka, columnar storage, data modeling, schema evolution","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163000,"maxValue":296400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_28608bb0-b72"},"title":"Software Engineer - Full Stack","description":"<p>Help millions of people find the right local businesses and services at the moments that matter most. At Bing Places, we build the systems that power local discovery across Microsoft experiences. You’ll work at the intersection of engineering, data, and product to improve the quality, relevance, and trustworthiness of local search at global scale.</p>\n<p>In this role, you’ll build and operate scalable systems that power accurate and trustworthy local search experiences across Microsoft. 
As a Software Engineer II on Bing Places, you’ll collaborate with engineers, data scientists, and product partners to integrate diverse data sources, improve ranking quality, and ship features used by millions of customers.</p>\n<p>The role offers solid growth opportunities as you deepen your expertise in distributed systems and geospatial data. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Contribute to architecture, engineering standards, and development practices across the team.</p>\n<p>Work with appropriate stakeholders to determine user requirements for a set of features.</p>\n<p>Contribute to the identification of dependencies, and the development of design documents for a product area with little oversight.</p>\n<p>Create and implement code for a product, service, or feature, reusing code as applicable.</p>\n<p>Contribute to efforts to break down larger work items into smaller work items and provide estimates.</p>\n<p>Act as a Designated Responsible Individual (DRI) working on-call to monitor system/product feature/service for degradation, downtime, or interruptions and gain approval to restore system/product/service for simple problems.</p>\n<p>Remain current in skills by investing time and effort into staying abreast of current developments that will improve the availability, reliability, efficiency, observability, and performance of products while also driving consistency in monitoring and operations at scale.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding
in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 3+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>1+ year of data engineering experience leveraging tools such as Apache Hadoop or Spark, or equivalent experience.</p>\n<p>Experience with Azure Cloud and Azure Data Factory (ADF); 3+ years of experience in problem solving, design, coding, and debugging.</p>\n<p>Demonstrated experience with products that involve high availability/reliability and low latency systems.</p>\n<p>#MicrosoftAI</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_28608bb0-b72","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-full-stack-2/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Base pay range for this role across the U.S.
is USD $100,600 – $199,000 per year.","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Hadoop","Spark","Azure Cloud","Azure Data Factory (ADF)"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:13:33.845Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Hadoop, Spark, Azure Cloud, Azure Data Factory (ADF)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6c7ddfe8-e54"},"title":"Solutions Architect (Greater China Region)","description":"<p>At Databricks, our core principles are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>\n<p>We aim to inspire our customers to make informed decisions that push their business forward. 
We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>\n<p>As a Solutions Architect in the Greater China Region, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>\n<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>Join us in our quest to change how people work with data and make a better world!</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in the Greater China Region to provide technical and business value in collaboration with Account Executives.</li>\n</ul>\n<ul>\n<li>Operate as an expert in big data analytics to excite customers about Databricks.</li>\n</ul>\n<ul>\n<li>Develop into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>\n</ul>\n<ul>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n</ul>\n<ul>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>\n</ul>\n<ul>\n<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>\n</ul>\n<ul>\n<li>Prior experience with coding 
in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Apache Spark.</li>\n</ul>\n<ul>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>\n</ul>\n<ul>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of technical depth.</li>\n</ul>\n<ul>\n<li>Business proficiency in Mandarin and experience in the Greater China Region are required to enable effective collaboration and understanding of client needs.</li>\n</ul>\n<p>The successful candidate will engage with the Greater China Region customers in Mandarin for technical sales discussions, address technical challenges, and articulate clear technical solutions and value propositions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6c7ddfe8-e54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499584002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Apache Spark","Big Data Analytics","Mandarin"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:48.156Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Apache Spark, Big Data Analytics, 
Mandarin"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9b1d250c-732"},"title":"Senior Applied Scientist","description":"<p>Conversational commerce introduces challenges that differ from traditional web shopping. Preferences emerge through dialogue, expectations for accuracy and trust are high, and systems must reason over context and frequently changing commerce data. Microsoft Copilot is building shopping experiences that are conversational, proactive, and trustworthy. As a Senior Applied Scientist, you will lead the development of machine learning and generative AI systems that power product discovery, ranking, personalization, and reasoning across Copilot shopping surfaces.</p>\n<p>This role sits at the intersection of applied machine learning, generative AI, and product experience, with clear ownership of core shopping intelligence used directly in user-facing Copilot experiences. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or a 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and productionize machine learning models for product discovery, ranking, recommendation, and personalization using large-scale commerce and behavioral data.</li>\n<li>Develop LLM-based systems for conversational shopping, including prompt design, retrieval-augmented generation, tool orchestration, and grounding against structured commerce data.</li>\n<li>Address quality and trust challenges such as hallucination risk, stale data, and recommendation reliability.</li>\n<li>Define evaluation frameworks and experimentation strategies for conversational and proactive shopping scenarios, including offline metrics and online experiments.</li>\n<li>Partner closely with product, engineering, and design teams to translate models into low-latency, reliable Copilot experiences.</li>\n<li>Provide technical leadership for applied science within Copilot Shopping through design reviews, mentoring, and setting quality standards.</li>\n<li>Contribute to model governance and Responsible AI practices to ensure trustworthy and compliant systems.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</li>\n<li>3+ years of hands-on experience developing machine learning or statistical models to solve real-world problems (in industry or academic projects),
including building and validating algorithms such as regressions, classifiers, or clustering models.</li>\n<li>Proficiency in programming for data science (e.g. using Python or R for data analysis and modeling) and experience with data querying languages (e.g. SQL).</li>\n<li>Big Data &amp; Distributed Computing: Hands-on experience with large-scale data processing using tools like Apache Spark or Azure Databricks for training and inference workflows.</li>\n<li>Advanced Analytics: Skilled in time-series analysis and anomaly detection techniques (e.g., ARIMA, isolation forests) applied to business contexts for actionable insights.</li>\n<li>LLMs &amp; Domain Adaptation: Practical experience with prompt engineering, fine-tuning GPT-like models, and applying LLMs in domain-heavy areas (healthcare, agriculture, social sciences) while ensuring privacy and Responsible AI compliance.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9b1d250c-732","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-56/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["machine learning","generative AI","product discovery","ranking","personalization","reasoning","Apache Spark","Azure Databricks","Python","R","SQL","time-series analysis","anomaly detection"],"x-skills-preferred":["prompt engineering","fine-tuning GPT-like models","LLMs in domain-heavy areas"],"datePosted":"2026-04-24T12:12:41.295Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, generative AI, product discovery, ranking, personalization, reasoning, Apache Spark, Azure Databricks, Python, R, SQL, time-series analysis, anomaly detection, prompt engineering, fine-tuning GPT-like models, LLMs in domain-heavy areas","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_492042ed-9ee"},"title":"Member of Technical Staff - Data Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure.
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data so that we can constantly evolve and improve human-AI interactions.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next-generation data platform products and services.</li>\n<li>Ship high-quality, well-tested, secure, and maintainable code.</li>\n<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n<li>3+ years experience with data governance, data compliance and/or data security.</li>\n<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>\n<li>Extensive use of datastores like RDBMS, key-value stores, etc.</li>\n<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>\n<li>Ability to identify, analyze, and resolve 
complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>\n<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>\n<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>\n<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_492042ed-9ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-6/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["Python","Java","Spark","SQL","Apache Hadoop","Kafka","NoSQL","data governance","data compliance","data security","Azure","AWS","GCP","RDBMS","key-value stores"],"x-skills-preferred":["distributed systems","containerization","networking","web development","AI"],"datePosted":"2026-04-24T12:11:56.893Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data 
security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a8f02572-a83"},"title":"Data & AI Platform Architect (Professional Services)","description":"<p>You will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design 
and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n<li>Documentation and white-boarding skills</li>\n<li>Experience working with clients and managing conflicts</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n<li>Travel to customers 10% of the time</li>\n</ul>\n<p>About Databricks:</p>\n<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a8f02572-a83","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8462016002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-24T12:11:38.006Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, 
GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3c9b96bf-348"},"title":"Software Engineer II","description":"<p>Imagine helping millions of users discover the best local businesses and services, right when they need them. At Bing Places, we’re on a mission to improve the quality and relevance of local search results across Microsoft platforms. You’ll be part of a team that blends data science, engineering, and product thinking to deliver intelligent, high-impact experiences that shape how people interact with the world around them.</p>\n<p>As a Software Engineer II in Bing Places, you will design and build scalable systems that enhance the accuracy, freshness, and trustworthiness of local search results. You’ll collaborate across disciplines to integrate diverse data sources, develop intelligent ranking algorithms, and ship features that directly impact millions of users. This opportunity will allow you to accelerate your career growth, deepen your understanding of geospatial and business data, and sharpen your skills in distributed systems and machine learning.</p>\n<p>We offer flexible work arrangements, including partial work-from-home options. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3c9b96bf-348","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-ii-19/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Hadoop","Spark"],"x-skills-preferred":["Azure Cloud","Azure Data Factory (ADF)","Azure Machine Learning (AML)"],"datePosted":"2026-04-24T12:11:28.826Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Hadoop, Spark, Azure Cloud, Azure Data Factory (ADF), Azure Machine Learning (AML)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e4fc0509-d39"},"title":"Resident Solutions Architect","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most 
value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the Sr. Manager, Professional Services.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include building reference architectures, how-to&#39;s and production-grade MVPs / Greenfield projects</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>8+ years experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native and Data Lakes in a customer-facing post-sales, technical architecture or consulting role</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the 
deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Travel to customers 20 - 30% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e4fc0509-d39","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8514430002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Python","Scala","Technical project delivery","Documentation and white-boarding skills","Experience working with clients and managing conflicts"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:11:25.394Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Western Australia, Australia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Python, Scala, Technical project delivery, Documentation and white-boarding skills, Experience working with clients and managing conflicts"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2fa970ee-3db"},"title":"Member of Technical Staff - Data Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. 
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time, and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data so that we can constantly evolve and improve human-AI interactions.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities: Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next-generation data platform products and services. Ship high-quality, well-tested, secure, and maintainable code. Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. Enjoy working in a fast-paced, design-driven, product development cycle. 
Embody our Culture and Values.</p>\n<p>Qualifications: Required Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience. Preferred Qualifications: 4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL. Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc. 3+ years experience with data governance, data compliance and/or data security. 2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP. Extensive use of datastores like RDBMS, key-value stores, etc. 2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking. Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience. Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security. Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers. Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders. Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI. 
Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2fa970ee-3db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-4/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["Python","Java","Spark","SQL","Apache Hadoop","Kafka","NoSQL","data governance","data compliance","data security","Azure","AWS","GCP","RDBMS","key-value stores"],"x-skills-preferred":["distributed systems","containerization","networking","web development","AI"],"datePosted":"2026-04-24T12:11:05.400Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_40513a9a-d3f"},"title":"AI Engineer - FDE (Forward Deployed Engineer)","description":"<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. 
We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams.</p>\n<p>We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>\n<li>Own production rollouts of consumer and internally facing GenAI applications</li>\n<li>Serve as a trusted technical advisor to customers across a variety of domains</li>\n<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>\n<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>\n<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimizations</li>\n<li>Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g., pandas, scikit-learn, PyTorch</li>\n<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>\n<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience</li>\n<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>\n<li>Passion for collaboration, life-long learning, and driving business value through AI</li>\n<li>[Preferred] Experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>\n<li>We require fluency in English and have a preference for candidates who also speak Mandarin</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_40513a9a-d3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8503080002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["GenAI","HuggingFace","LangChain","DSPy","pandas","scikit-learn","PyTorch","AWS","Azure","GCP","Apache Spark"],"x-skills-preferred":["Databricks Intelligence Platform","Mosaic AI research"],"datePosted":"2026-04-24T12:11:02.068Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7770176b-afc"},"title":"Sr. Solutions Engineer France","description":"<p>At Databricks, our core values are at the heart of everything we do. 
Our culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>\n<p>We aim to inspire our customers to make informed decisions that push their business forward. We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>\n<p>As a Sr. Solutions Engineer, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data and Intelligence Platform can help customers solve their complex data challenges.</p>\n<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity. You&#39;ll use your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>Join us in our quest to change how people work with data and make a better world!</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients throughout your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>\n</ul>\n<ul>\n<li>Operate as an expert in big data analytics to excite customers about Databricks. 
You will develop into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>\n</ul>\n<ul>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n</ul>\n<ul>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>\n</ul>\n<ul>\n<li>Passion for delivering technical propositions, identifying customers&#39; pain points and explaining essential areas for business value to develop a trusted advisor skillset.</li>\n</ul>\n<ul>\n<li>Knowledgeable in a core Big Data Analytics domain with some exposure to advanced Data Engineering and/or Data science use cases.</li>\n</ul>\n<ul>\n<li>Experience diving deeper into solution architecture and expertise with at least one major public cloud platform.</li>\n</ul>\n<ul>\n<li>Code in a core programming language like Python, Java, or Scala.</li>\n</ul>\n<ul>\n<li>A foundational understanding of Apache Spark architecture is preferable; hands-on skills will benefit the role.</li>\n</ul>\n<p>Notes on mandatory requirements:</p>\n<ul>\n<li>Flexibility to travel (up to 20-30% as required for customer meetings, events, and training).</li>\n</ul>\n<ul>\n<li>Business proficiency in French and English is required. 
Fluency in additional regional languages may be advantageous.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7770176b-afc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8452392002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Data Engineering","Data Science","Apache Spark","Python","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:09:51.671Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Data Engineering, Data Science, Apache Spark, Python, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a7186358-298"},"title":"Sr. Manager, Field Engineering - Agencies","description":"<p>We are seeking a dynamic Sr. Manager, Field Engineering - Agencies to lead a team of Solution Architects in our Agency segment. As a key member of our Field Engineering team, you will be responsible for driving the technical success of our customers in the Agencies vertical. This includes hiring, training, and growing a team of Solutions Architects, making customers successful with Databricks, and establishing relationships across the business to ensure customer and team success.</p>\n<p>The Agencies vertical sits at the intersection of data, AI, and advertising. 
You will lead a team that works with some of the most data-intensive companies in the world, holding companies managing billions in media spend and influencing the broader buy and sell side ecosystem.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Hiring, training, and growing a team of Solutions Architects</li>\n<li>Making customers successful with Databricks</li>\n<li>Establishing relationships across the business to ensure customer and team success</li>\n<li>Partnering with sales leadership to hit sales and consumption targets</li>\n<li>Keeping your team of SAs ahead of the technical curve</li>\n</ul>\n<p>To be successful in this role, you will need to have a strong technical background, excellent leadership skills, and the ability to communicate effectively with both technical and non-technical stakeholders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a7186358-298","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8250195002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,100-$264,175 USD","x-skills-required":["data warehousing","big data","machine learning","solution architecture","technical leadership","customer success","sales leadership"],"x-skills-preferred":["Databricks","Apache Spark","Delta Lake","MLflow","Lakehouse"],"datePosted":"2026-04-24T12:08:15.332Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, big data, machine learning, solution architecture, technical leadership, customer success, sales 
leadership, Databricks, Apache Spark, Delta Lake, MLflow, Lakehouse","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192100,"maxValue":264175,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b29aed1-6ae"},"title":"Sr. Manager, AI Forward Deployed Engineering (FDE)","description":"<p>We are looking for a world-class leader to lead and grow our AI FDE team. As a Sr. Manager, AI Forward Deployed Engineering (FDE), you will lead customers on their AI/ML transformation with Databricks, push the boundaries of our product, recruit and develop top data scientists/machine learning engineers, and manage a portfolio of key accounts.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Lead and scale a world-class AI/ML professional services team, including hiring, mentoring, and building a team structure to support long-term growth and execution at scale.</li>\n<li>Develop and expand executive relationships with key customers and partners, acting as a trusted advisor during complex technical engagements and AI transformations.</li>\n<li>Align with Field Engineering and Sales Leaders to define joint strategies for strategic accounts and ensure strong delivery coordination across functions.</li>\n<li>Lead strategic AI PS initiatives, practice development, and standardized delivery processes; design scalable engagement models and reusable solutions for repeatability across the global team.</li>\n<li>Shape cross-functional collaboration by influencing Product, R&amp;D, and GTM, ensuring voice-of-customer insights and delivery learnings help inform the product roadmap and GTM strategy.</li>\n<li>Own OKRs for AI-services led accounts, revenue, utilization, and public references.</li>\n<li>Represent Databricks as a thought leader in AI/ML.</li>\n</ul>\n<p>The ideal candidate will have extensive experience managing, hiring, and building a team
of high-performing data scientists/ML engineers and leaders, with a track record of scaling organizations through developing scalable processes and cultivating leaders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b29aed1-6ae","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8515642002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$211,800-$291,300 USD","x-skills-required":["Machine Learning","GenAI","Data Science","Cloud Computing","Leadership","Team Management","Strategic Planning","Cross-Functional Collaboration"],"x-skills-preferred":["Databricks","Apache Spark","Delta Lake","MLflow"],"datePosted":"2026-04-24T12:07:58.029Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, GenAI, Data Science, Cloud Computing, Leadership, Team Management, Strategic Planning, Cross-Functional Collaboration, Databricks, Apache Spark, Delta Lake, MLflow","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":211800,"maxValue":291300,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_272750a8-710"},"title":"Consultant","description":"<p>As a Consultant at MHP, you will operate infrastructure in AWS using Terraform, create technical concepts for new features and enhancements within a Scrum Team, develop and maintain scalable Java Spring Boot microservices, and work with AWS and 
Kubernetes.</p>\n<p>You will have expertise in backend programming using Java and Spring Boot, experience with AWS, including services like S3, EC2, and Lambda, and experience with Terraform for creating and managing AWS infrastructure.</p>\n<p>You will also have experience with tools such as IntelliJ and REST tools (Postman or similar), proficiency in working with Kubernetes for microservices, advanced-level AWS certification, experience with Apache Kafka event streaming, experience working with the MongoDB database, and experience working with GitLab CI/CD pipelines.</p>\n<p>Your start date is by arrangement; you will work full-time (40h) with 27 vacation days and have a permanent employment contract. You will need a valid work permit and be fluent in written and spoken English.</p>\n<p>At MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will be part of a strong team spirit, where every win, big or small, belongs to all of us. You will welcome curiosity, creativity, and unconventional thinking patterns, and recognize the importance of healthy, tight-knit communities and sustainable environmental changes.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_272750a8-710","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18226","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","AWS","Terraform","Kubernetes","IntelliJ","REST tools","Apache Kafka","MongoDB","GitLab CI/CD pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:25:42.569Z","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"Java,
Spring Boot, AWS, Terraform, Kubernetes, IntelliJ, REST tools, Apache Kafka, MongoDB, GitLab CI/CD pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b33cbd91-bc9"},"title":"Systematic Production Support Engineer","description":"<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>\n<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>\n<li>Implementing automated systems and processes focused on trading and operations</li>\n<li>Streamlining development and deployment processes</li>\n</ul>\n<p>Technical qualifications include:</p>\n<ul>\n<li>5+ years of development experience in Python</li>\n<li>Experience working in a Linux/Unix environment</li>\n<li>Experience working with PostgreSQL or other 
relational databases</li>\n</ul>\n<p>Preferred skills and experience include:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>\n<li>Experience operating and monitoring low-latency trading environments</li>\n<li>Familiarity with quantitative finance and electronic trading concepts</li>\n<li>Familiarity with financial data</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>\n<li>Experience with Apache/Confluent Kafka</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>\n<li>Experience with containerization and orchestration technologies</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>\n<li>Contributions to open-source projects</li>\n</ul>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b33cbd91-bc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954716155","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Linux/Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models","low-latency trading environments","quantitative finance","electronic trading concepts","financial data","equities","futures","FX","distributed systems","backend development","C/C++","Java","Scala","Go","C#","Apache/Confluent Kafka","SDLC 
pipelines","containerization","orchestration technologies","AWS","GCP","Azure"],"x-skills-preferred":["Understanding of NLP, supervised/non-supervised learning, and Generative AI models","Experience operating and monitoring low-latency trading environments","Familiarity with quantitative finance and electronic trading concepts","Familiarity with financial data","Broad understanding of equities, futures, FX, or other financial instruments","Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#","Experience with Apache/Confluent Kafka","Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)","Experience with containerization and orchestration technologies","Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure","Contributions to open-source projects"],"datePosted":"2026-04-18T22:14:36.583Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, Understanding of NLP, supervised/non-supervised learning, and Generative AI models, Experience operating and monitoring low-latency trading environments, Familiarity with quantitative finance and electronic trading concepts, Familiarity with financial data, Broad understanding of equities, futures, FX, or other financial instruments, Experience designing and developing distributed systems with a focus on backend development in C/C++, 
Java, Scala, Go, or C#, Experience with Apache/Confluent Kafka, Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline), Experience with containerization and orchestration technologies, Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure, Contributions to open-source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0987988a-011"},"title":"Feature Framework Engineer","description":"<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at the core of Millennium&#39;s Equities, Quant Strategies, and Shared Services Technology organisation.</p>\n<p>We are looking for a C++ engineer to design and build high-performance, low-latency applications that process large volumes of real-time data.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain high-performance C++ services handling high message rates and low-latency workloads.</li>\n</ul>\n<ul>\n<li>Optimise existing components for latency, throughput, and CPU/memory efficiency.</li>\n</ul>\n<ul>\n<li>Develop and tune networking, messaging, and I/O layers to handle large data volumes reliably.</li>\n</ul>\n<ul>\n<li>Profile and debug performance issues at application, OS, and network levels.</li>\n</ul>\n<ul>\n<li>Collaborate with quantitative, trading, and infrastructure teams to translate requirements into robust technical solutions.</li>\n</ul>\n<ul>\n<li>Write clean, production-quality code with appropriate tests and documentation.</li>\n</ul>\n<ul>\n<li>Participate in code reviews, design discussions, and continuous improvement of engineering practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>Strong proficiency in modern C++ (C++17/20 or later).</li>\n</ul>\n<ul>\n<li>5+ years of experience.</li>\n</ul>\n<ul>\n<li>Analytics Focus: KDB / Q Experience for large market data, modern data analysis with 
PyTorch, pandas, and modern tooling including Apache Arrow.</li>\n</ul>\n<ul>\n<li>Familiar with basic statistics as applied to financial research.</li>\n</ul>\n<ul>\n<li>Proven experience building performance-critical, real-time, or low-latency systems.</li>\n</ul>\n<ul>\n<li>Strong knowledge of computer science fundamentals: data structures, algorithms, memory management, and optimisation.</li>\n</ul>\n<ul>\n<li>Experience using profiling, benchmarking, and performance analysis tools.</li>\n</ul>\n<ul>\n<li>Proficiency with version control (Git) and standard build systems.</li>\n</ul>\n<ul>\n<li>Excellent problem-solving skills and attention to detail.</li>\n</ul>\n<ul>\n<li>Strong interpersonal skills with a proven ability to navigate large organisations.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with kernel bypass or user-space networking technologies.</li>\n</ul>\n<ul>\n<li>Familiarity with AI productivity enhancing coding tools.</li>\n</ul>\n<ul>\n<li>Experience in financial markets, market data distribution, order routing, or exchange connectivity.</li>\n</ul>\n<ul>\n<li>Experience with monitoring/telemetry for high-performance systems.</li>\n</ul>\n<ul>\n<li>Familiarity with scripting languages for tooling and automation.</li>\n</ul>\n<p>Personal Attributes:</p>\n<ul>\n<li>Obsessed with performance, measurement, and data-driven optimisation.</li>\n</ul>\n<ul>\n<li>Comfortable owning features end-to-end and operating in a production environment.</li>\n</ul>\n<ul>\n<li>Clear communicator who can work closely with both technical and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Proactive, self-directed, and able to thrive in a highly iterative environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0987988a-011","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955682418","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["modern C++","KDB / Q","pytorch","pandas","Apache arrow","data structures","algorithms","memory management","optimisation","profiling","benchmarking","performance analysis tools","version control","standard build systems"],"x-skills-preferred":["kernel bypass","user-space networking technologies","AI productivity enhancing coding tools","financial markets","market data distribution","order routing","exchange connectivity","monitoring/telemetry","scripting languages"],"datePosted":"2026-04-18T22:14:03.382Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"modern C++, KDB / Q, pytorch, pandas, Apache arrow, data structures, algorithms, memory management, optimisation, profiling, benchmarking, performance analysis tools, version control, standard build systems, kernel bypass, user-space networking technologies, AI productivity enhancing coding tools, financial markets, market data distribution, order routing, exchange connectivity, monitoring/telemetry, scripting 
languages","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32932504-2b5"},"title":"Systematic Production Support Engineer","description":"<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>\n<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>\n<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>\n<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>\n<li>Implementation of automated systems and processes focused on trading and operations.</li>\n<li>Streamlining development and deployment processes.</li>\n<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team as well as proactively monitoring the production environment.</li>\n</ul>\n<p>Technical Qualifications:</p>\n<ul>\n<li>5+ years of development experience in Python.</li>\n<li>Experience working in a Linux / Unix environment.</li>\n<li>Experience working with PostgreSQL or other relational databases.</li>\n<li>Ability to understand and discuss requirements from portfolio managers.</li>\n</ul>\n<p>Preferred Skills and
Experience:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning and Generative AI models.</li>\n<li>Experience operating and monitoring low-latency trading environments.</li>\n<li>Familiarity with quantitative finance and electronic trading concepts.</li>\n<li>Familiarity with financial data.</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>\n<li>Experience with Apache / Confluent Kafka.</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>\n<li>Experience with containerization and orchestration technologies.</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>\n<li>Contributions to open-source projects.</li>\n</ul>\n<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. 
When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32932504-2b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954627501","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$100,000 to $175,000","x-skills-required":["Python","Linux / Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models"],"x-skills-preferred":["Apache / Confluent Kafka","C/C++","Java","Scala","Go","C#","containerization","orchestration technologies","AWS","GCP","Azure"],"datePosted":"2026-04-18T22:13:42.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America · Old Greenwich, Connecticut, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux / Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04c1ff49-2d1"},"title":"Data Platform Solutions Architect (Professional Services)","description":"<p>We&#39;re hiring for multiple roles within our Professional Services 
team. As a Data Platform Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing
conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 10% of the time</li>\n</ul>\n<p>[Preferred] Databricks Certification but not essential</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_04c1ff49-2d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8396801002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","technical project delivery","documentation and white-boarding skills"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:58:52.546Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e58b08f7-c31"},"title":"Senior Data Engineer","description":"<p>As a Senior Data Engineer on the Analytics Team, you will collaborate with stakeholders across the company to design, build and implement data pipelines and models that enable our next generation 
of technology to be deployed around the world. You will have a hand in helping shape the data platform vision at Anduril.</p>\n<p>We&#39;re looking for software and data engineers who are seeking high impact collaborative roles focused on driving operational execution. Ideally you are looking to learn what it takes to build the next generation of defence technology.</p>\n<p>Your responsibilities will include leading the design and roadmap for our data platform, partnering with operations, product, and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs, owning the ingest and egress frameworks for data pipelines that stitch together various data sources in order to produce valuable data products that drive the business, and managing a large user base and providing true data self-service at scale.</p>\n<p>We use Palantir Foundry as our central hub for data-driven applications, visualizations and large-scale data analysis across the Anduril org. 
We also use SQLMesh for data transformations, Athena for querying data, Apache Iceberg as our table format, and Flyte for orchestration.</p>\n<p>Required qualifications include 5+ years of experience in a data engineering role building products, ideally in a fast-paced environment, good foundations in Python or another language, experience with Spark, PySpark, SQL and dbt, experience with Enterprise Data Systems like Palantir Foundry, and experience with or interest in learning how to develop data services and data products.</p>\n<p>The salary range for this role is $166,000-$220,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e58b08f7-c31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/4587312007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$220,000 USD","x-skills-required":["Python","Spark","PySpark","SQL","dbt","Palantir Foundry","SQLMesh","Athena","Apache Iceberg","Flyte"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:44.003Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Spark, PySpark, SQL, dbt, Palantir Foundry, SQLMesh, Athena, Apache Iceberg, Flyte","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5b244f27-9fd"},"title":"Resident Solutions Architect - 
Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>\n<p>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications. Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</p>\n<p>Provide an escalated level of support for customer operational issues.
You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</p>\n<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</p>\n<p>The ideal candidate will have 6+ years of experience in data engineering, data platforms &amp; analytics, comfortable writing code in either Python or Scala, working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one, deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals, familiarity with CI/CD for production deployments, working knowledge of MLOps, design and deployment of performant end-to-end data architectures, experience with technical project delivery - managing scope and timelines, documentation and white-boarding skills, experience working with clients and managing conflicts, build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</p>\n<p>Travel to customers 20% of the time.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5b244f27-9fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461258002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production
deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:34.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Raleigh, North Carolina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5fa8591-cb8"},"title":"Solutions Architect: Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>","url":"https://yubhub.co/jobs/job_e5fa8591-cb8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8353757002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:24.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a38ec886-62e"},"title":"AI Engineer - FDE (Forward Deployed Engineer)","description":"<p>Mission</p>\n<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications.</p>\n<p>We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. 
We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team.</p>\n<p>This team is the right fit for you if you love working with customers, teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>\n<li>Own production rollouts of consumer and internally facing GenAI applications</li>\n<li>Serve as a trusted technical advisor to customers across a variety of domains</li>\n<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>\n<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>\n<li>5+ years of relevant experience as a Data Scientist, preferably in a consulting role</li>\n<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimizations</li>\n<li>Extensive hands-on industry data science experience, leveraging common machine learning and data science tools such as pandas, scikit-learn, and PyTorch</li>\n<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>\n<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience</li>\n<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>\n<li>Passion for collaboration, life-long learning, and driving business value through AI</li>\n<li>Preferred experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>\n</ul>","url":"https://yubhub.co/jobs/job_a38ec886-62e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8099751002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["GenAI","HuggingFace","LangChain","DSPy","pandas","scikit-learn","PyTorch","AWS","Azure","GCP","Apache Spark"],"x-skills-preferred":["Databricks Intelligence Platform","Mosaic AI research"],"datePosted":"2026-04-18T15:58:10.707Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2f6f807-fc6"},"title":"Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>We 
are looking for a software engineer to join our team as a founding member of our Belgrade site. As a software engineer, you will be involved in the entire development cycle and exemplify all core Databricks values.</p>\n<p>The responsibilities you will have:</p>\n<ul>\n<li>Drive requirements clarity and design decisions for ambiguous problems</li>\n<li>Produce technical design documents and project plans</li>\n<li>Develop new features</li>\n<li>Mentor more junior engineers</li>\n<li>Test and rollout to production, monitoring</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science or equivalent practical experience in databases or distributed systems</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Motivated by delivering customer value and impact</li>\n<li>3+ years of production level experience in either Java, Scala or C++</li>\n<li>Solid foundation in algorithms and data structures and their real-world use cases</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop)</li>\n</ul>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please click here.</p>","url":"https://yubhub.co/jobs/job_b2f6f807-fc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012691002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_10290548-1ea"},"title":"Solutions Architect - Public Sector (LEAPS)","description":"<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. 
The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to accomplish value.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>\n<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>\n<li>Consulting on big data architecture, implementing proof of concepts for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications</li>\n<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>\n<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>\n<li>Traveling to customers in your region</li>\n</ul>\n<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. 
Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>","url":"https://yubhub.co/jobs/job_10290548-1ea","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8320126002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Apache Spark","MLflow","Delta Lake","Python","Scala","Java","SQL","R","AWS","Azure","GCP","Data Engineering","Data Warehousing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.145Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Maryland; Virginia; Washington, D.C."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7af76f0d-cb6"},"title":"Geo Hunter Account Executive","description":"<p>As a Geo Hunter Account Executive on Databricks&#39; LATAM team, you will be responsible for selling Databricks&#39; enterprise cloud data platform powered by Apache Spark to customers in Brazil. 
You will have the opportunity to close new accounts, increase consumption and create new workloads in existing accounts, and exceed activity, pipeline, and revenue targets.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Presenting a territory plan within the first 90 days</li>\n<li>Meeting with CIOs, IT executives, LOB executives, program managers, and other important partners</li>\n<li>Closing both new accounts and existing accounts</li>\n<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>\n<li>Exceeding activity, pipeline, and revenue targets</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Native Portuguese speaker with strong English language skills</li>\n<li>Previous experience in field sales within big data, cloud, and SaaS sales</li>\n<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>\n<li>Ability to simply articulate intricate cloud technologies</li>\n<li>3+ years of relevant full-cycle sales experience exceeding quotas</li>\n<li>Understanding of Apache Spark and big data preferable</li>\n</ul>\n<p>Benefits include accelerators above 100% quota attainment and a commitment to diversity and inclusion.</p>","url":"https://yubhub.co/jobs/job_7af76f0d-cb6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7675324002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["native Portuguese speaker","field sales experience","big data","cloud","SaaS 
sales","Apache Spark","Salesforce"],"x-skills-preferred":["communication skills","problem-solving skills"],"datePosted":"2026-04-18T15:57:48.833Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"native Portuguese speaker, field sales experience, big data, cloud, SaaS sales, Apache Spark, Salesforce, communication skills, problem-solving skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ba129b2-e3a"},"title":"Solutions Architect (Hong-Kong)","description":"<p>We are seeking a Solutions Architect to join our Field Engineering team in Singapore. As a Solutions Architect, you will be responsible for demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges. You will work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in Hong Kong to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect.</li>\n<li>Gain excitement from clients about Databricks through hands-on evaluation and Apache Spark programming, integrating with the wider cloud ecosystem and 3rd party applications.</li>\n<li>Contribute to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>\n<li>Become a Big Data Analytics advisor on aspects of architecture and design.</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications.</li>\n<li>Develop both technically and in the pre-sales aspect with the goal of becoming an independently operating 
Solutions Architect.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Familiarity working with clients, creating a narrative, answering customer questions, aligning the agenda with important interests, and achieving tangible outcomes.</li>\n<li>Ability to independently deliver a technical proposition, identify customers&#39; pain-points, and explain important areas for business value to develop a trusted advisor skillset.</li>\n<li>Code in a core programming language such as Python, Java, or Scala.</li>\n<li>Knowledgeable in a core Big Data Analytics domain with some exposure to advanced proofs-of-concept and an understanding of a major public cloud platform.</li>\n<li>Experience diving deeper into solution architecture and design.</li>\n<li>Proficiency in Cantonese is required as this role serves clients based in Hong Kong and involves direct customer communications in Cantonese</li>\n</ul>","url":"https://yubhub.co/jobs/job_1ba129b2-e3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437010002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Python","Java","Scala","Big Data Analytics","Cloud Computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:32.290Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Python, Java, Scala, Big Data Analytics, Cloud 
Computing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fdc6f0f9-900"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects, which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide 
rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_fdc6f0f9-900","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461168002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","distributed computing","CI/CD","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:29.214Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Los Angeles, 
California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, distributed computing, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cb18189c-d78"},"title":"Solutions Architect (Pre-sales) - Kansai Region","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>\n<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. 
Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>\n<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Understanding of customer-facing pre-sales or consulting role with a core strength in either Data Engineering or Data Science advantageous</li>\n<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>\n<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, click here.</p>","url":"https://yubhub.co/jobs/job_cb18189c-d78","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437028002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","Scala","Java","R","Public Cloud","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:24.678Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd67fe82-1c8"},"title":"Solutions Architect : Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide , including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 , rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd67fe82-1c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8346277002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data technologies","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.281Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data technologies, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_03224784-9c2"},"title":"Senior Data Engineering Manager","description":"<p>Job Title: Senior Data Engineering Manager</p>\n<p>Location: Dublin, Ireland</p>\n<p>Department: R&amp;D</p>\n<p>Job Description:</p>\n<p>Intercom is seeking a Senior Data Engineering Manager to lead the design and evolution of the core infrastructure that powers our entire data ecosystem. 
As a leader, you will partner with product and business teams to drive key data initiatives and ensure the success of our data engineering team.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Next-Gen Platform Evolution: Partner with product and business teams to design and implement the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>\n</ul>\n<ul>\n<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build self-service tooling and infrastructure that enables them to move fast and deploy safely.</li>\n</ul>\n<ul>\n<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment.</li>\n</ul>\n<ul>\n<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>\n</ul>\n<p>Strategic Impact You&#39;ll Drive:</p>\n<ul>\n<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>\n</ul>\n<ul>\n<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible.</li>\n</ul>\n<ul>\n<li>Platform &amp; Warehousing Strategy: Lead the architectural- and cost review and revamp of our core data infrastructure to ensure it can scale exponentially for future growth and advanced use cases.</li>\n</ul>\n<p>Recent Wins You&#39;ll Build Upon:</p>\n<ul>\n<li>AI-assisted Local Analytics Development Environment for Airflow and DBT.</li>\n</ul>\n<ul>\n<li>Data-rich AI apps containerized on Snowflake SPCS.</li>\n</ul>\n<ul>\n<li>A new, modern data catalog solution.</li>\n</ul>\n<ul>\n<li>Migrating critical MySQL ingestion pipelines from Aurora 
to PlanetScale.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>A leader, a builder, and a problem-solver who thrives on solving real-world business problems.</li>\n</ul>\n<ul>\n<li>7+ years of experience in the data space, leading teams of 6+ engineers.</li>\n</ul>\n<ul>\n<li>Stakeholder focus: ability to communicate complex technical solutions to a business-focused audience and vice versa.</li>\n</ul>\n<ul>\n<li>Technical depth: not afraid to get hands dirty and write code when needed.</li>\n</ul>\n<ul>\n<li>A leader and mentor: naturally recognizes opportunities to step back and mentor others.</li>\n</ul>\n<p>Bonus Points (Our Modern Stack Knowledge):</p>\n<ul>\n<li>Airflow at scale: extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>\n</ul>\n<ul>\n<li>Modern data stack fluency: familiarity with tools like Snowflake and DBT.</li>\n</ul>\n<ul>\n<li>Future-focused: keeps a keen eye on industry trends and emerging technologies.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and equity in a fast-growing start-up.</li>\n</ul>\n<ul>\n<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen.</li>\n</ul>\n<ul>\n<li>Regular compensation reviews - we reward great work!</li>\n</ul>\n<ul>\n<li>Pension scheme &amp; match up to 4%.</li>\n</ul>\n<ul>\n<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>\n</ul>\n<ul>\n<li>Open vacation policy and flexible holidays so you can take time off when you need it.</li>\n</ul>\n<ul>\n<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones.</li>\n</ul>\n<ul>\n<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme. 
With secure bike storage too.</li>\n</ul>\n<ul>\n<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>\n</ul>\n<p>Policies:</p>\n<ul>\n<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home.</li>\n</ul>\n<ul>\n<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_03224784-9c2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7574762","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Airflow","Apache Airflow","DBT","Snowflake","Data Engineering","Data Science","Analytics","Data Management","Data Quality","Automation","Cloud Computing","Data Warehousing","Big Data","Machine Learning","Artificial Intelligence"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:06.635Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Management, Data Quality, Automation, Cloud Computing, Data Warehousing, Big Data, Machine Learning, Artificial Intelligence"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0fb2e339-447"},"title":"Enterprise 
Hunter Account Executive (FSI - North)","description":"<p>As an Enterprise Account Executive in Databricks, you will be responsible for selling the company&#39;s enterprise cloud data platform powered by Apache Spark to financial services institutions in India. Your goal will be to close new accounts while maintaining existing ones, and to exceed activity, pipeline, and revenue targets.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Presenting a territory plan within the first 90 days</li>\n<li>Meeting with CIOs, IT executives, LOB executives, program managers, and other important partners</li>\n<li>Closing both new accounts and existing accounts</li>\n<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>\n<li>Exceeding activity, pipeline, and revenue targets</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>\n</ul>\n<p>To succeed in this role, you will need to have 7+ years of experience in enterprise sales, with a proven track record of exceeding quotas and closing new accounts. You should also have a strong understanding of cloud technologies and be able to articulate intricate concepts simply.</p>\n<p>In addition to your technical skills, you will need to be a strong communicator and be able to build relationships with key decision-makers. 
You should also be comfortable working in a fast-paced environment and be able to adapt to changing priorities.</p>\n<p>If you are a motivated and results-driven sales professional who is looking for a new challenge, we encourage you to apply for this role.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0fb2e339-447","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8438952002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Enterprise sales","Cloud technologies","Apache Spark","Salesforce","Customer relationship building"],"x-skills-preferred":["Big data","Data analytics","Artificial intelligence"],"datePosted":"2026-04-18T15:56:57.783Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Delhi, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Enterprise sales, Cloud technologies, Apache Spark, Salesforce, Customer relationship building, Big data, Data analytics, Artificial intelligence"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f962d3f-14e"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to 
help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime 
internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipated utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f962d3f-14e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461218002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems","Apache Spark","CI/CD","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:09.899Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dallas, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ceb4835-0f1"},"title":"Manager, Professional Services","description":"<p>As a Manager, Professional Services, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. 
You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>\n<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>\n<li>6+ years of experience working on Big Data Architectures independently.</li>\n<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>\n<li>Experience working on Databricks platform is a plus.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Willingness to travel for onsite customer engagements within India.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5ceb4835-0f1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8503068002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Big Data Technologies","Data Engineering","Data Science","Cloud Technology","People Management","Team Leadership"],"x-skills-preferred":["Databricks","GCP","AWS","Azure","Documentation","White-boarding"],"datePosted":"2026-04-18T15:56:03.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team Leadership, Databricks, GCP, AWS, Azure, Documentation, White-boarding"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_023e0d6c-5a8"},"title":"Geo Core Account Executive - Oil, Gas & Energy","description":"<p>As an Enterprise Account Executive on our Oil, Gas and Energy enterprise sales team, you will be responsible for selling Databricks&#39; enterprise cloud data platform powered by Apache Spark to large-scale industrial clients.</p>\n<p>You will present a territory plan within the first 90 days, meet with CIOs, IT executives, LOB executives, Program Managers, and other important partners, and close both new accounts and existing accounts.</p>\n<p>To succeed in this role, you will need to have previously worked in an early-stage company and have experience in field sales 
within big data, Cloud, and SaaS sales.</p>\n<p>You will also need to have prior customer relationships with CIOs, program managers, and essential decision-makers, and be able to simply articulate intricate cloud technologies.</p>\n<p>The pay range for this role is $220,100-$302,600 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_023e0d6c-5a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8439679002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$220,100-$302,600 USD","x-skills-required":["Enterprise sales","Cloud sales","Big data sales","SaaS sales","Apache Spark","Lakehouse","Delta Lake","MLflow"],"x-skills-preferred":["Prior customer relationships with CIOs","Program managers","Essential decision-makers"],"datePosted":"2026-04-18T15:55:55.238Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Energy","skills":"Enterprise sales, Cloud sales, Big data sales, SaaS sales, Apache Spark, Lakehouse, Delta Lake, MLflow, Prior customer relationships with CIOs, Program managers, Essential decision-makers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220100,"maxValue":302600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0036f074-845"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a 
Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap hands-on projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in 
either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0036f074-845","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8456966002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","design and deployment of highly performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:41.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, 
Massachusetts"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8317ba42-502"},"title":"Senior Technical Solutions Engineer (Platform)","description":"<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with over 7+ years of experience to join our Platform Support team.</p>\n<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>\n<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>\n</ul>\n<ul>\n<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. 
Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>\n</ul>\n<ul>\n<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>\n</ul>\n<ul>\n<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>\n</ul>\n<ul>\n<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>\n</ul>\n<ul>\n<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>\n</ul>\n<ul>\n<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>\n</ul>\n<ul>\n<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. 
Propose and drive initiatives to enhance platform capabilities and support processes.</li>\n</ul>\n<ul>\n<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience: Minimum of 7+ years of hands-on experience in a technical support or engineering role related to Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>\n</ul>\n<ul>\n<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines, distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>\n</ul>\n<ul>\n<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>\n</ul>\n<ul>\n<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>\n</ul>\n<ul>\n<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>\n</ul>\n<ul>\n<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. 
Advanced degree or relevant certifications are highly desirable.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>\n</ul>\n<ul>\n<li>Familiarity with automation tools and CI/CD pipelines.</li>\n</ul>\n<ul>\n<li>Understanding of data governance and compliance requirements.</li>\n</ul>\n<p>Why Join Us?</p>\n<ul>\n<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>\n</ul>\n<ul>\n<li>Career Growth: Opportunities for professional development and career advancement.</li>\n</ul>\n<ul>\n<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>\n</ul>\n<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8317ba42-502","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8041698002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Databricks architecture","Apache Spark","AWS","Azure","GCP","Unix/Linux administration","DevOps practices","log analysis and monitoring tools"],"x-skills-preferred":["Hadoop","Kafka","NoSQL databases","automation tools","CI/CD pipelines","data governance and compliance requirements"],"datePosted":"2026-04-18T15:55:32.901Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux 
administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a5be03ca-ea6"},"title":"Named Core Account Executive - Industrial","description":"<p>As a Named Core Account Executive - Industrial at Databricks, you will be responsible for managing a small set of clients in our Industrial subvertical. You will come with an informed point of view on Big Data, Advanced Analytics, and AI which will help to guide your successful execution strategy and allow you to provide genuine value to the client.</p>\n<p>Your responsibilities will include building relationships with CIOs, IT executives, LOB executives, Program Managers, and other important partners. You will drive value-based growth within the account, expand the Databricks footprint into new business units and use cases, and exceed activity, pipeline, and revenue targets.</p>\n<p>To succeed in this role, you will need to have previously excelled in an early-stage company, have previous field sales experience within big data, Cloud, SaaS, and a consumption selling motion, and have prior customer relationships with CIOs, program managers, and essential decision makers at local accounts.</p>\n<p>The pay range for this role is $272,000-$374,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a5be03ca-ea6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8439683002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$272,000-$374,000 USD","x-skills-required":["Big Data","Advanced Analytics","AI","Cloud","SaaS","Sales"],"x-skills-preferred":["Apache Spark","Delta Lake","MLflow"],"datePosted":"2026-04-18T15:55:21.586Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Big Data, Advanced Analytics, AI, Cloud, SaaS, Sales, Apache Spark, Delta Lake, MLflow","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":272000,"maxValue":374000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0a7cad02-cd5"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and 
building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding 
skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0a7cad02-cd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494155002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache 
Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:20.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Philadelphia, Pennsylvania"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1f2f48ad-46d"},"title":"Senior Analytics Engineer","description":"<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>\n<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions. You will lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows. You will conduct data analysis and build custom models to support strategic business decisions and performance measurement. You will streamline data collection and reporting processes to reduce manual effort and improve efficiency. You will create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs. 
You will work with partner teams to align data collection with long-term analytics and feature development goals.</p>\n<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake. You should have a growth mindset and eagerness to learn. You should exhibit great judgment and sharp business and product instincts that allow you to differentiate essential versus nice-to-have and to make good choices about trade-offs. You should practice excellent communication skills, and you should tailor explanations of technical concepts to a variety of audiences.</p>\n<p>Nice to have: exposure to Apache Airflow or other DAG frameworks, worked in Tableau, Looker, or similar visualization/business intelligence platform, experience with operational tools and business systems like Google Analytics, Marketo, Salesforce, Segment, or Stripe, familiarity with Python.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1f2f48ad-46d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7807847","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["advanced SQL","dbt","Snowflake","data pipeline development","ETL process management","data strategy & governance","advanced analytics & insights","automation & optimization","innovation in data infrastructure","strategic influence"],"x-skills-preferred":["Apache Airflow","Tableau","Looker","Google 
Analytics","Marketo","Salesforce","Segment","Stripe","Python"],"datePosted":"2026-04-18T15:55:10.503Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy & governance, advanced analytics & insights, automation & optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0b29d013-412"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. As a senior software engineer on the Runtime team, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Some example projects include: Apache Spark: Develop the de facto open source standard framework for big data. Data Plane Storage: Provide reliable and high performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming. 
Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. The goal of the Delta Pipelines project is to make it simple and possible to orchestrate and operate tens of thousands of data pipelines. Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning free, scalable, and robust.</p>\n<p>We look for: BS (or higher) in Computer Science, related technical field or equivalent practical experience. Comfortable working towards a multi-year vision with incremental deliverables. Motivated by delivering customer value and impact. 5+ years of production level experience in either Java, Scala or C++. Strong foundation in algorithms and data structures and their real-world use cases. Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</p>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Local Pay Range $166,000-$225,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0b29d013-412","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/4513122002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:01.767Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bbbd3f3a-5fe"},"title":"Solutions Architect (Pre-sales) - Digital Native","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the Digital Native field.</p>\n<p>You will help our customers achieve tangible data-driven outcomes through the use of the Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem. You&#39;ll grow as a leader in your field while finding solutions to our customers&#39; biggest challenges in big data, analytics, data engineering and data science.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Experience in a customer-facing pre-sales or consulting role, with a core strength in either Data Engineering or Data Science advantageous</li>\n<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure or GCP)</li>\n<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others.</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese and English</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bbbd3f3a-5fe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437026002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Public Cloud","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra"],"x-skills-preferred":["Python","Scala","Java","R"],"datePosted":"2026-04-18T15:54:50.098Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo, Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Public Cloud, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fc79e6e5-5c0"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use 
cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills 
in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fc79e6e5-5c0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494156002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","Data engineering","Data science","Cloud 
technology"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:34.838Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, Data engineering, Data science, Cloud technology","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_44ad5a7e-cf5"},"title":"Solutions Architect (Taiwan)","description":"<p>We are seeking a Solutions Architect to join our Field Engineering team in Singapore. As a Solutions Architect, you will be responsible for demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges. You will work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in Taiwan, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>\n<li>Operate as an expert in big data analytics to excite customers about Databricks. 
You will develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>\n<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>\n<li>Prior experience with coding in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Apache Spark.</li>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of technical depth.</li>\n<li>Proficiency in Mandarin is required as this role serves clients based in Taiwan and involves direct customer communications in Mandarin</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_44ad5a7e-cf5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499585002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Apache Spark","Big Data Analytics","Mandarin"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:23.481Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Apache Spark, Big Data Analytics, Mandarin"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_19182c1d-b27"},"title":"Solutions Architect - UAE","description":"<p>At Databricks, our core values are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>\n<p>We aim to inspire our customers to make informed decisions that push their business forward. 
We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>\n<p>As a Solutions Architect in the UAE Pre-Sales team, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>\n<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customised solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>Join us in our quest to change how people work with data and make a better world!</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Create impactful and successful relationships with customer accounts in the United Arab Emirates, providing technical and business value to Databricks customers in collaboration with the extended team.</li>\n</ul>\n<ul>\n<li>Become the trusted advisor of your customer on the Data and AI landscape by successfully driving and delivering the adoption of the Databricks Data Intelligence Platform.</li>\n</ul>\n<ul>\n<li>Enable Partners and support internal events in the MEA region.</li>\n</ul>\n<ul>\n<li>Scale best practices in your field by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n</ul>\n<ul>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experienced in customer interactions in a technical pre-sales capacity and adept in managing complex sales lifecycles.</li>\n</ul>\n<ul>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring an ability to switch context and/or levels of technical 
depth.</li>\n</ul>\n<ul>\n<li>Ability to provide technical solutions for specialised customer needs, navigate a competitive landscape and effectively develop relationships to achieve long-term customer success.</li>\n</ul>\n<ul>\n<li>Hands-on expertise with complex Big Data architecture design for public cloud platform(s) solutions, focusing on use cases in Data Warehousing and Data Engineering architecture and implementation.</li>\n</ul>\n<p>Data Science and Machine Learning skills will be advantageous.</p>\n<ul>\n<li>Prior experience with coding in a core programming language (i.e., Python, SQL etc.) and willingness to learn Apache Spark™.</li>\n</ul>\n<ul>\n<li>Experience and skills on the Databricks platform will be highly advantageous for the role!</li>\n</ul>\n<ul>\n<li>Excellent communication skills in English required as a minimum. Fluency in Arabic will be highly preferable for the position.</li>\n</ul>\n<p>Key Notes:</p>\n<ul>\n<li>Location for the role will be in Paris (i.e. within a commutable distance for a hybrid schedule).</li>\n</ul>\n<ul>\n<li>You will need to be flexible and willing to travel to the United Arab Emirates for customer visits on a regular basis (i.e. 
up to ~2 weeks per month).</li>\n</ul>\n<ul>\n<li>We are seeking a candidate that will be interested in a future relocation to the region (Dubai) when an office is opened.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_19182c1d-b27","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8287419002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["customer interactions","technical pre-sales capacity","complex sales lifecycles","use case discovery","solution architecture designs","Big Data architecture design","public cloud platform(s)","Data Warehousing","Data Engineering","Apache Spark","Python","SQL"],"x-skills-preferred":["Data Science","Machine Learning","Arabic"],"datePosted":"2026-04-18T15:54:11.217Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"customer interactions, technical pre-sales capacity, complex sales lifecycles, use case discovery, solution architecture designs, Big Data architecture design, public cloud platform(s), Data Warehousing, Data Engineering, Apache Spark, Python, SQL, Data Science, Machine Learning, Arabic"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a78c8753-f89"},"title":"Staff Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are obsessed with enabling data teams to solve the world&#39;s toughest problems. 
We do this by building and running the world&#39;s best data and AI infrastructure platform, so our customers can focus on the high-value challenges that are central to their own missions.</p>\n<p>We develop and operate one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day. At our scale, we regularly observe cloud hardware, network, and operating system faults, and our software must gracefully shield our customers from any of the above.</p>\n<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Below are some example projects:</p>\n<ul>\n<li>Apache Spark: Develop the de facto open source standard framework for big data.</li>\n<li>Data Plane Storage: Deliver reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Delta Lake: A storage management system that combines the scale and cost-efficiency of data lakes, the performance and reliability of a data warehouse, and the low latency of streaming.</li>\n<li>Delta Pipelines: It&#39;s difficult to manage even a single data engineering pipeline. 
The goal of the Delta Pipelines project is to make it simple to orchestrate and operate tens of thousands of data pipelines.</li>\n<li>Performance Engineering: Build the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science, a related technical field, or equivalent practical experience.</li>\n<li>Optional: MS or PhD in databases or distributed systems.</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables.</li>\n<li>Driven by delivering customer value and impact.</li>\n<li>8+ years of production-level experience in either Java, Scala, or C++.</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases.</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop).</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a78c8753-f89","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544364002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:03.334Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9be280f4-cbc"},"title":"Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>\n<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, 
fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>\n<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>\n<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9be280f4-cbc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008","x-work-arrangement":"onsite","x-experience-level":null,"x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["backend language (Python or Rust)","distributed compute frameworks (Apache Spark or Ray)","cloud infrastructure","data lake architectures","batch and streaming pipelines"],"x-skills-preferred":["Kafka","dbt","Terraform","Airflow","web crawler","deduplication","data mining","search","file formats and storage systems"],"datePosted":"2026-04-18T15:54:00.309Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_601c2dc5-462"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Our customers use deep data insights to improve their business. We are a customer-obsessed company that leaps at every opportunity to solve technical challenges.</p>\n<p>As a software engineer on the Runtime team at Databricks, you will be building the next generation distributed data storage and processing systems that can outperform specialized SQL query engines in relational query performance, yet provide the expressiveness and programming abstractions to support diverse workloads ranging from ETL to data science.</p>\n<p>Some example projects include:</p>\n<ul>\n<li>Developing the de facto open source standard framework for big data, Apache Spark.</li>\n<li>Providing reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>\n<li>Building the next generation query optimizer and execution engine that&#39;s fast, tuning-free, scalable, and robust.</li>\n</ul>\n<p>We look for candidates with a strong foundation in algorithms and data 
structures and their real-world use cases, experience with distributed systems, databases, and big data systems, and a BS (or higher) in Computer Science or a related technical field.</p>\n<p>The pay range for this role is $166,000-$225,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_601c2dc5-462","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6544325002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$225,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Hadoop","Distributed systems","Databases","Big data systems"],"x-skills-preferred":["Algorithms","Data structures","Real-world use cases","Cloud storage backends","Query optimizer","Execution engine"],"datePosted":"2026-04-18T15:53:54.425Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Hadoop, Distributed systems, Databases, Big data systems, Algorithms, Data structures, Real-world use cases, Cloud storage backends, Query optimizer, Execution engine","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1ccc8c6-f09"},"title":"Geo Hunter Account Executive, Manufacturing & 
High-Tech","description":"<p>As a Geo Hunter Enterprise Account Executive at Databricks, you will be responsible for selling into and activating Large Manufacturing accounts. You will be a strategic sales professional with experience in selling innovation and change through customer vision expansion. Your goal will be to guide deals forward to compress decision cycles and close exciting deals. We offer accelerators above 100% quota attainment.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners</li>\n<li>Closing both new accounts and existing accounts</li>\n<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>\n<li>Exceeding activity, pipeline, and revenue targets</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>\n<li>Using a solution-based approach to selling and creating value for customers</li>\n<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark</li>\n<li>Ensuring 100% satisfaction among all customers</li>\n<li>Prioritizing opportunities and applying appropriate resources</li>\n<li>Building a plan for success internally at Databricks and externally with your accounts</li>\n</ul>\n<p>We are looking for someone with:</p>\n<ul>\n<li>Previous experience in an early-stage company and knowledge of how to navigate and be successful</li>\n<li>Field sales experience within big data, Cloud, or SaaS sales</li>\n<li>Experience managing large, complex Manufacturing accounts is preferred</li>\n<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>\n<li>Ability to simply articulate intricate cloud technologies</li>\n<li>5+ years experience exceeding sales quotas</li>\n<li>Success closing new accounts while working existing accounts</li>\n<li>Understanding of Spark and big data 
preferable</li>\n<li>Passion for cloud technologies</li>\n<li>Bachelor&#39;s Degree</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a1ccc8c6-f09","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8193347002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$167,100-$229,800 USD","x-skills-required":["big data","Cloud","SaaS sales","sales quotas","Spark","Apache Spark","Delta Lake","MLflow"],"x-skills-preferred":["cloud technologies","customer vision expansion","solution-based approach","customer satisfaction"],"datePosted":"2026-04-18T15:53:53.336Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"big data, Cloud, SaaS sales, sales quotas, Spark, Apache Spark, Delta Lake, MLflow, cloud technologies, customer vision expansion, solution-based approach, customer satisfaction","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":167100,"maxValue":229800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_afe090d9-2bd"},"title":"Technical Support Engineer Intern (Summer 2026)","description":"<p>About Us</p>\n<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. 
We&#39;re looking for builders who see the cracks in the Internet that everyone else has simply learned to live with.</p>\n<p>As a Technical Support Engineer Intern at Cloudflare, you&#39;ll work directly with customers and cross-functional teams to tackle a variety of technical challenges. You&#39;ll gain hands-on experience with our products, learn the inner workings of Cloudflare&#39;s offerings, and deepen your understanding of internet technologies.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Collaborate with senior engineers to analyze and troubleshoot customer issues</li>\n<li>Track support requests using our ticketing system</li>\n<li>Participate in team meetings to discuss and share feedback</li>\n<li>Help create and update technical documentation and run books</li>\n<li>Provide feedback on our product and potential improvements based on customer interactions</li>\n<li>Support the team in testing new releases and reporting bugs</li>\n<li>Perform other duties/projects as assigned</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>Currently pursuing an undergraduate degree in Computer Science or a related field</li>\n<li>Self-driven and capable of learning new technologies/systems/features with some guidance</li>\n<li>Fundamental understanding of how the Internet works (OSI Model)</li>\n<li>Experience using Linux</li>\n<li>Experience with the command line and tools, including curl, dig, traceroute, openssl, and git</li>\n<li>Experience writing scripts in Bash, Python, JavaScript, or other scripting languages</li>\n<li>Awareness of what DNS, SSL/TLS, and HTTP are and how they function</li>\n<li>Awareness of or experience installing and configuring web servers like Apache, Nginx, and IIS</li>\n<li>Must be able to work 40 hours a week</li>\n<li>Must be able to commit to a 12-week program</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience troubleshooting network connectivity issues, BGP routing, and GRE tunnels</li>\n<li>You are familiar with Cloudflare and have a site 
actively using our platform</li>\n</ul>\n<p>Super Bonus Points</p>\n<ul>\n<li>You are fluent and can troubleshoot in Mandarin, Spanish, and Portuguese</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. 
We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_afe090d9-2bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7726879","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Linux","curl","dig","traceroute","openssl","git","Bash","Python","JavaScript","DNS","SSL/TLS","HTTP","Apache","Nginx","IIS"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:41.341Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, curl, dig, traceroute, openssl, git, Bash, Python, JavaScript, DNS, SSL/TLS, HTTP, Apache, Nginx, IIS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4ea7999b-3d8"},"title":"Resident Solutions Architect - Healthcare & Life 
Sciences","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects that require integration with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects, leading to the customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code 
in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping 
automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4ea7999b-3d8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494145002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems","Apache Spark","CI/CD","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:02.737Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin, Texas"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8aadecbf-9e0"},"title":"Geo Hunter Account Executive, Manufacturing","description":"<p>As a Geo Hunter Account Executive at Databricks, you will be a strategic sales professional experienced in selling into and activating Large Manufacturing accounts. You will know how to sell innovation and change through customer vision expansion and guide deals forward to compress decision cycles. 
You will love understanding a product in depth and be passionate about communicating its value to Customers and System Integrators.</p>\n<p>Your responsibilities will include meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners, closing both new accounts and existing accounts, identifying and closing quick, small wins while managing longer, complex sales cycles, exceeding activity, pipeline, and revenue targets, tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce, using a solution-based approach to selling and creating value for customers, promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark, ensuring 100% satisfaction among all customers, prioritizing opportunities and applying appropriate resources, and building a plan for success internally at Databricks and externally with your accounts.</p>\n<p>We look for individuals who have previously worked in an early stage company and know how to navigate and be successful, have field sales experience within big data, Cloud, or SaaS sales, have experience managing large, complex Manufacturing accounts, have prior customer relationships with CIOs, program managers, and essential decision makers, can simply articulate intricate cloud technologies, have 5+ years experience exceeding sales quotas, have success closing new accounts while working existing accounts, and have an understanding of Spark and big data.</p>\n<p>The pay range for this role is $167,100-$229,800 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8aadecbf-9e0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8438296002","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$167,100-$229,800 USD","x-skills-required":["big data","Cloud","SaaS sales","Salesforce","Apache Spark","customer relationship management","solution-based selling"],"x-skills-preferred":["Spark","cloud technologies"],"datePosted":"2026-04-18T15:52:35.293Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"California; Remote - Colorado; Remote - Oregon; Remote - Washington"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"big data, Cloud, SaaS sales, Salesforce, Apache Spark, customer relationship management, solution-based selling, Spark, cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":167100,"maxValue":229800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e111d755-f4e"},"title":"Senior Solutions Architect - AI/BI","description":"<p>The Solutions Architect (AI/BI) team executes on Databricks&#39; strategic Product Operating Model that provides enhanced focus on earlier stage, highly prioritized product lines in order to establish product market fit, and set the course for rapid revenue growth.</p>\n<p>They are part of a global go-to-market team mandate, though individually will cover a specific, local region. 
Clients may span across one or more business units and verticals.</p>\n<p>By working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, and position the specific product line in depth, so as to provide compelling reasons for clients to adopt and grow the usage of the given product.</p>\n<p>The Solutions Architect (AI/BI) is paired with an Account Executive aligned to a given product line with specific targets accordingly. Together, they will devise and implement a strategy across their assigned set of accounts, develop presentations, demos, and other assets and deliver them such that clients make an informed decision as they decide to adopt the product line in a meaningful way.</p>\n<p>The AI/BI product line requires the following core technical competencies:</p>\n<ul>\n<li>Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions in a client or customer environment</li>\n<li>Ability to advise customers in lakehouse analytics architecture: Prepare Databricks stakeholders for internal conversations and communicate directly, including anticipating blockers and addressing them before they become an issue</li>\n<li>Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS, or GCP cloud providers</li>\n<li>Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python, and SQL</li>\n</ul>\n<p>The impact you will have:</p>\n<ul>\n<li>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>\n<li>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</li>\n<li>Enable clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and 
thought leadership.</li>\n<li>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams</li>\n<li>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</li>\n</ul>\n<p>Competencies &amp; Responsibilities:</p>\n<ul>\n<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>\n<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>\n<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>\n<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>\n<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>\n<li>Broad experience (in two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>\n<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering, or similar.</li>\n<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>\n</ul>
","url":"https://yubhub.co/jobs/job_e111d755-f4e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437289002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud-based Data Visualisation and Analytics Solutions","Lakehouse analytics architecture","Data visualisation and analytics systems","Apache Spark","Python","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:34.674Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud-based Data Visualisation and Analytics Solutions, Lakehouse analytics architecture, Data visualisation and analytics systems, Apache Spark, Python, SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba30b234-c68"},"title":"Senior Data Engineer, Payments","description":"<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>\n<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. 
You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>\n<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>\n<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on coding experience with data structures and algorithms (DSA). You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>","url":"https://yubhub.co/jobs/job_ba30b234-c68","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7256787","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Python","data processing technologies","query authoring (SQL)","ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue)","data warehousing concepts","relational databases (PostgreSQL, MySQL)","columnar databases (Redshift, BigQuery, HBase, ClickHouse)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:13.348Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Python, data processing technologies, 
query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d1a7c541-3a1"},"title":"Senior Software Engineer - Distributed Data Systems","description":"<p>We are seeking a senior software engineer to join our team in Belgrade. As a founding member of our Belgrade site, you will be involved in the entire development cycle and exemplify all core Databricks values. Your responsibilities will include driving requirements clarity and design decisions for ambiguous problems, producing technical design documents and project plans, developing new features, mentoring more junior engineers, testing and rolling out to production, and monitoring.</p>\n<p>To be successful in this role, you will need a BS in Computer Science or equivalent practical experience in databases or distributed systems, comfort working towards a multi-year vision with incremental deliverables, motivation to deliver customer value and impact, and 5+ years of production-level experience in either Java, Scala, or C++. You should also have a solid foundation in algorithms and data structures and their real-world use cases, experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop), and a strong understanding of software engineering principles and practices.</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please click here.</p>\n<p>Our commitment to diversity and inclusion is a key part of our culture, and we take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>","url":"https://yubhub.co/jobs/job_d1a7c541-3a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012800002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:08.194Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_462806a6-650"},"title":"Technical Support Engineer Intern (Summer 2026)","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. 
Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>We were named to Entrepreneur Magazine&#39;s Top Company Cultures list and ranked among the World&#39;s Most Innovative Companies by Fast Company.</p>\n<p>About the Department</p>\n<p>The Customer Support Team solves complicated problems and answers technical inquiries via phone, email, chat, and social media. Whether it is a Wordpress blogger using our services for free or a global Enterprise business with petabytes of web traffic, our team is always eager to assist.</p>\n<p>We are the eyes and ears of Cloudflare, acting as the real-time voice of the customer to help communicate their needs and real-world use cases back to the rest of the company - to help build a better service and future product development.</p>\n<p>What You&#39;ll Do</p>\n<p>Do you love solving complex technical problems and interacting with people? As a Technical Support Engineer Intern at Cloudflare, you&#39;ll work directly with customers and cross-functional teams to tackle a variety of technical challenges.</p>\n<p>You&#39;ll gain hands-on experience with our products, learn the inner workings of Cloudflare&#39;s offerings, and deepen your understanding of internet technologies. 
This role also provides opportunities to develop valuable technical and professional skills, as well as job shadowing experiences to explore different roles within the company.</p>\n<p>Join us to enhance your skill set while making a real impact!</p>\n<p>Responsibilities</p>\n<ul>\n<li>Collaborate with senior engineers to analyze and troubleshoot customer issues</li>\n<li>Track support requests using our ticketing system</li>\n<li>Participate in team meetings to discuss and share feedback</li>\n<li>Help create and update technical documentation and run books</li>\n<li>Provide feedback on our product and potential improvements based on customer interactions</li>\n<li>Support the team in testing new releases and reporting bugs</li>\n<li>Perform other duties/projects as assigned</li>\n</ul>\n<p>Skills and Requirements</p>\n<ul>\n<li>Currently pursuing an undergraduate degree in Computer Science or a related field</li>\n<li>Self-driven and capable of learning new technologies/systems/features with some guidance</li>\n<li>Fundamental understanding of how the Internet works (OSI Model); Cloudflare has a variety of products that presently impact Layers 3, 4 &amp; 7</li>\n<li>Experience using Linux</li>\n<li>Experience with command-line tools, including curl, dig, traceroute, openssl and git</li>\n<li>Experience writing scripts in Bash, Python, JavaScript, or other scripting languages</li>\n<li>Awareness of what DNS, SSL/TLS and HTTP are and how they function</li>\n<li>Awareness of or experience installing and configuring web servers like Apache, Nginx, and IIS</li>\n<li>Must be able to work 40 hours a week</li>\n<li>Must be able to commit to a 12-week program</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience troubleshooting network connectivity issues, BGP routing, and GRE tunnels</li>\n<li>You are familiar with Cloudflare and have a site actively using our platform</li>\n</ul>\n<p>Super Bonus Points</p>\n<ul>\n<li>You are fluent and can troubleshoot in Mandarin, 
Spanish, and Portuguese</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? 
We’d love to hear from you!</p>","url":"https://yubhub.co/jobs/job_462806a6-650","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7726977","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Linux","curl","dig","traceroute","openssl","git","Bash","Python","JavaScript","DNS","SSL/TLS","HTTP","Apache","Nginx","IIS"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:36.331Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, curl, dig, traceroute, openssl, git, Bash, Python, JavaScript, DNS, SSL/TLS, HTTP, Apache, Nginx, IIS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b00b781c-eba"},"title":"Senior Software Engineer - Database Engine Internals","description":"<p>We&#39;re seeking a Senior Software Engineer to join our team in designing next-generation systems for database engine internals. As part of this multi-year journey, you&#39;ll drive requirements clarity and design decisions for ambiguous problems. Your responsibilities will include producing technical design documents and project plans, developing new features, mentoring junior engineers, testing and rolling out to production, and monitoring.</p>\n<p>Our ideal candidate has a passion for database systems, storage systems, distributed systems, language design, or performance optimisation. 
They should be comfortable working towards a multi-year vision with incremental deliverables and be customer-oriented with a focus on having an impact. A minimum of 5 years of experience working in a related system is required, with a PhD in databases or distributed systems being optional.</p>\n<p>In return, we offer a comprehensive benefits package and a commitment to diversity and inclusion. If you&#39;re excited about the opportunity to join our team and contribute to the development of next-generation database systems, please apply.</p>","url":"https://yubhub.co/jobs/job_b00b781c-eba","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012809002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["database systems","storage systems","distributed systems","language design","performance optimisation","Apache Spark","Delta Lake","MLflow"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:15.118Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database systems, storage systems, distributed systems, language design, performance optimisation, Apache Spark, Delta Lake, MLflow"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1125d83c-1eb"},"title":"Staff Software Engineer - Backend","description":"<p>As a Staff Software Engineer with a backend focus, you will work closely with your team and product management to prioritise, design, implement, test, 
and operate micro-services for the Databricks platform and product.</p>\n<p>This involves writing software in Scala/Java, building data pipelines (Apache Spark, Apache Kafka), integrating with third-party applications, and interacting with cloud APIs (AWS, Azure, CloudFormation, Terraform).</p>\n<p>You will be part of a team that builds highly technical products that fulfil real, important needs in the world. We constantly push the boundaries of data and AI technology, while simultaneously operating with the resilience, security and scale that is critical to making customers successful on our platform.</p>\n<p>Our engineering teams build one of the largest scale software platforms. The fleet consists of millions of virtual machines, generating terabytes of logs and processing exabytes of data per day.</p>\n<p>We run thousands of Kubernetes clusters across all regions and orchestrate millions of VMs on a daily basis.</p>\n<p>Competencies:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>10+ years of production level experience in one of: Java, Scala, C++, or similar language</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Experience in architecting, developing, deploying, and operating large scale distributed systems</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Good knowledge of SQL</li>\n<li>Experience with software security and systems that handle sensitive data</li>\n<li>Experience with cloud technologies, e.g. 
AWS, Azure, GCP, Docker, Kubernetes</li>\n</ul>","url":"https://yubhub.co/jobs/job_1125d83c-1eb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/6779233002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$182,400-$247,000 USD","x-skills-required":["Java","Scala","C++","Apache Spark","Apache Kafka","Cloud APIs","AWS","Azure","CloudFormation","Terraform","SQL","Software security","Cloud technologies"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:07.479Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Apache Spark, Apache Kafka, Cloud APIs, AWS, Azure, CloudFormation, Terraform, SQL, Software security, Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":182400,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f5cb25c4-0c1"},"title":"Strategic Core Account Executive - Retail","description":"<p>As a Strategic Core Account Executive - Retail at Databricks, you will be responsible for managing a strategic enterprise client in the Retail vertical. 
You will come with an informed point of view on Big Data, Advanced Analytics and AI which will help to guide your successful execution strategy and allow you to provide genuine value to the client.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building relationships with CIOs, IT executives, LOB executives, Program Managers, and other important partners.</li>\n<li>Driving value-based growth within the account.</li>\n<li>Expanding the Databricks footprint into new business units and use cases.</li>\n<li>Exceeding activity, pipeline, and revenue targets.</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce.</li>\n<li>Using a solution-based approach to selling and creating value for customers.</li>\n<li>Promoting Databricks&#39; Data Intelligence Platform powered by Apache Spark™ and Delta Lake.</li>\n<li>Prioritizing opportunities and leveraging appropriate resources.</li>\n<li>Building a plan for success internally at Databricks and externally with your account.</li>\n</ul>\n<p>We are looking for someone with 7+ years of Enterprise Sales experience exceeding quotas in larger accounts, managing a small set of enterprise accounts rather than a broad territory, and a Bachelor&#39;s Degree.</p>","url":"https://yubhub.co/jobs/job_f5cb25c4-0c1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8458710002","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$311,600-$428,450 USD","x-skills-required":["Enterprise Sales","Big Data","Advanced Analytics","AI","Salesforce","Apache Spark","Delta 
Lake"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:06.172Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Ohio"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Enterprise Sales, Big Data, Advanced Analytics, AI, Salesforce, Apache Spark, Delta Lake","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":311600,"maxValue":428450,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7eb73baf-db6"},"title":"Engineering Manager - Streaming","description":"<p>We are seeking a dedicated Engineering Leader to spearhead Spark Structured Streaming development initiatives. Your primary mission will be to make Spark Structured Streaming the state-of-the-art stream processing engine by adding advanced features such as sophisticated state management and new operators, and by improving engine performance in both latency and throughput through a reimagined engine architecture.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading a talented engineering team on Spark Structured Streaming, developing and promoting the engine in OSS and the Databricks Data Intelligence Platform</li>\n<li>Overseeing sustained recruitment of top-tier talent, and upskilling talent on the team</li>\n<li>Implementing robust processes to efficiently execute product vision, strategy, and roadmap in alignment with organisational goals and priorities</li>\n<li>Building software that is not just high quality but easy to operate</li>\n<li>Making company-wide impact by driving stream processing adoption across the Databricks product portfolio</li>\n<li>Managing technical debt, including long-term technical architecture decisions, while balancing the product roadmap</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years experience working in a 
related system such as streaming, query processing, or query optimisation, including the big-data ecosystem, Apache Spark, or database internals</li>\n<li>A passion for database systems, storage systems, distributed systems, language design, or performance optimisation</li>\n<li>Can ensure the team builds high-quality and reliable infrastructure services. Experience being responsible for testing, quality, and SLAs of a product</li>\n<li>Previous experience building and leading teams in a complex technical domain, such as on distributed data systems or database internals</li>\n<li>Ability to attract, hire, and coach engineers who meet the Databricks hiring standards. Can uplevel the existing team by hiring top-notch senior talent, growing leaders, and helping struggling members. Can gain the trust of the team and guide their careers</li>\n<li>Comfortable working cross-functionally with product management and directly with customers; ability to deeply understand product and customer personas</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. 
The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>","url":"https://yubhub.co/jobs/job_7eb73baf-db6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8324875002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$181,000-$253,750 USD","x-skills-required":["Apache Spark","Streaming","Query processing","Query optimisation","Big-data ecosystem","Database internal"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:40.103Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Streaming, Query processing, Query optimisation, Big-data ecosystem, Database internal","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":181000,"maxValue":253750,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eda5b2b8-a68"},"title":"Senior Solutions Architect - AI/BI","description":"<p>We are seeking a Senior Solutions Architect - AI/BI to join our Field Engineering team in London. 
The successful candidate will be responsible for executing on Databricks&#39; strategic Product Operating Model, providing enhanced focus on earlier stage, highly prioritized product lines to establish product market fit and set the course for rapid revenue growth.</p>\n<p>As a Senior Solutions Architect - AI/BI, you will work in partnership with direct account teams to jointly engage clients, foster necessary relationships, position the specific product line in depth, and provide compelling reasons for clients to adopt and grow the usage of the given product.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>\n<li>Serving as a trusted advisor, expert Solutions Architect, and champion, building technical credibility with stakeholders to drive product adoption and vision.</li>\n<li>Enabling clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>\n<li>Influencing product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>\n</ul>\n<p>To succeed in this role, you will need:</p>\n<ul>\n<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>\n<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>\n<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>\n<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>\n<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>\n<li>Broad experience (in 
two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>\n</ul>\n<p>If you are a motivated and experienced professional with a passion for data and AI, we encourage you to apply for this exciting opportunity.</p>","url":"https://yubhub.co/jobs/job_eda5b2b8-a68","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8407183002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions","Ability to advise customers in lakehouse analytics architecture","Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers","Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:38.084Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions, Ability to advise customers in lakehouse analytics architecture, Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers, Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and 
SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_245a7b5f-cac"},"title":"Staff Software Engineer (Infrastructure)","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>As a Staff Software Engineer at Databricks India, you can get to work across various domains, including backend infrastructure, distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>\n<p>Our Infrastructure Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:</p>\n<ul>\n<li>Problems that span from product to infrastructure including: distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n</ul>\n<ul>\n<li>Deliver reliable and high performance services and client libraries for storing and accessing humongous amount of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n</ul>\n<ul>\n<li>Build reliable, scalable services, e.g. Scala, Kubernetes, and data pipelines, e.g. 
Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS (or higher) in Computer Science, or a related field</li>\n</ul>\n<ul>\n<li>12+ years of production-level experience in one of: Python, Java, Scala, C++, or similar language</li>\n</ul>\n<ul>\n<li>6+ years experience developing large-scale distributed systems from scratch</li>\n</ul>\n<ul>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n</ul>\n<ul>\n<li>Experience working on Infrastructure-related projects is a plus</li>\n</ul>","url":"https://yubhub.co/jobs/job_245a7b5f-cac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7648674002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Kubernetes","Apache Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:04.399Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Kubernetes, Apache Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dcd95220-cc8"},"title":"Engineering Manager - Pipelines Engine","description":"<p>We are seeking a dedicated Engineering Leader to 
spearhead the Pipelines Engine engineering team. The team is responsible for building next-generation Runtime ETL features and ensuring that Lakeflow Pipelines has state-of-the-art performance for ETL workloads. You will also spearhead the Agentic Data Engineering infrastructure, by building the next-generation engine to power agentic pipeline authoring, execution and maintenance.</p>\n<p>The main responsibilities include:</p>\n<ul>\n<li>Lead an engineering team building the next-generation ETL features for the Databricks Lakeflow platform.</li>\n<li>Oversee sustained recruitment of top-tier talent, and upskilling talent on the team.</li>\n<li>Build processes to implement product vision and strategy, according to organisational goals and priorities.</li>\n<li>Build software that is not just high quality but easy to operate.</li>\n<li>Manage technical debt, including long-term technical architecture decisions and balance product roadmap.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Minimum 3 years of experience in managing top-tier engineering teams.</li>\n<li>5+ years experience building data infrastructure systems such as Apache Spark or database internals.</li>\n<li>A passion for database systems, storage systems, distributed systems, or performance optimisation.</li>\n<li>Experience working with product management, and directly with customers; ability to understand customer needs.</li>\n<li>Can ensure the team builds high quality and reliable infrastructure services.</li>\n<li>Experience being responsible for testing, quality, and Service Level Agreements of a product.</li>\n<li>Experience building and managing teams in a complex technical domain, such as on distributed data systems or database internals.</li>\n<li>Expertise in attracting, hiring and coaching engineers, who will meet the Databricks hiring standards.</li>\n<li>Experience up-leveling teams via hiring top-notch talent and growing existing team members.</li>\n</ul>","url":"https://yubhub.co/jobs/job_dcd95220-cc8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8467083002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000-$261,250 USD","x-skills-required":["Apache Spark","database internals","distributed systems","performance optimisation","product management","customer needs","testing","quality","Service Level Agreements"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:57.939Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California; San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, database internals, distributed systems, performance optimisation, product management, customer needs, testing, quality, Service Level Agreements","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":261250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_85f1f87e-70f"},"title":"Resident Solutions Architect - Financial Services","description":"<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help 
customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap hands-on projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>9+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime 
internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Capable of design and deployment of highly performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>\n</ul>\n<ul>\n<li>Travel to customers up to 20% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>","url":"https://yubhub.co/jobs/job_85f1f87e-70f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461327002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:49:55.028Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin, Texas"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for 
production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_821d6af4-827"},"title":"Senior Solutions Architect - AI/BI","description":"<p>The Solutions Architect (AI/BI) team executes on Databricks&#39; strategic Product Operating Model to establish product market fit and set the course for rapid revenue growth.</p>\n<p>As a Senior Solutions Architect - AI/BI, you will collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>\n<li>Serving as a trusted advisor and expert Solutions Architect, building technical credibility with stakeholders to drive product adoption and vision.</li>\n<li>Enabling clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>\n<li>Influencing product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>\n</ul>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>\n<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>\n<li>Experience collaborating with Global System 
Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>\n<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>\n<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>\n<li>Broad experience (in two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>\n</ul>\n<p>Required skills include:</p>\n<ul>\n<li>Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions in a client or customer environment.</li>\n<li>Ability to advise customers in lakehouse analytics architecture.</li>\n<li>Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers.</li>\n<li>Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and SQL.</li>\n</ul>\n<p>Preferred skills include:</p>\n<ul>\n<li>Experience with Databricks products and services.</li>\n<li>Knowledge of data science and machine learning concepts.</li>\n</ul>\n<p>This is a senior-level role that requires a strong background in data and AI, as well as excellent communication and collaboration skills.</p>","url":"https://yubhub.co/jobs/job_821d6af4-827","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437301002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud-based Data Visualisation and Analytics Solutions","Lakehouse analytics 
architecture","Data visualisation and analytics systems","Apache Spark","Python","SQL"],"x-skills-preferred":["Databricks products and services","Data science and machine learning concepts"],"datePosted":"2026-04-18T15:49:47.877Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud-based Data Visualisation and Analytics Solutions, Lakehouse analytics architecture, Data visualisation and analytics systems, Apache Spark, Python, SQL, Databricks products and services, Data science and machine learning concepts"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ffd169d9-40b"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd 
party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Pay Range Transparency 
Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_ffd169d9-40b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461239002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","data platforms & analytics","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:46.649Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta, 
Georgia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data platforms & analytics, Python, Scala, AWS, Azure, GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6bd506fa-79a"},"title":"Strategic Enterprise Account Executive (Digital Natives) | Eastern EMEA","description":"<p>Do you want to solve the world&#39;s toughest problems using the power of Data and AI? At Databricks, that is our daily reality. We are the pioneers of the Data Lakehouse, and we are looking for a world-class Strategic Enterprise Account Executive to join our Eastern EMEA team.</p>\n<p>Your mission is high-stakes: you will own and scale one of our most significant Strategic Scaleups (Digital Natives) in the region. This isn&#39;t just a sales role; it is a partnership with a global unicorn that has transitioned into a massive enterprise. 
You will guide them through the next frontier of AI transformation.</p>\n<p><strong>The Impact You Will Have</strong></p>\n<ul>\n<li>Architect the Strategy: Co-author a multi-year business plan with your team and ecosystem partners to exceed quarterly booking goals and accelerate customer usage.</li>\n<li>Master the Use Case: Lead a &#39;Special Forces&#39; team of technical experts and partners to identify high-impact Big Data and AI use cases, proving the undeniable value of the Databricks Platform.</li>\n<li>Drive Transformation: Execute your customer&#39;s AI roadmap through a blend of strategic partnerships, expert professional services, and high-level Executive Engagement.</li>\n<li>Build Technical Trust: Develop a deep understanding of our product roadmap to become a trusted advisor to both C-level visionaries and technical champions.</li>\n</ul>\n<p><strong>What We Look For</strong></p>\n<ul>\n<li>The &#39;Unicorn&#39; Expert: Proven experience building deep, influential relationships with large, global &#39;mature unicorns.&#39; You understand the high-velocity, high-complexity culture of Digital Natives.</li>\n<li>Industry Pedigree: Deep roots in the Big Data, Cloud, or SaaS sectors. 
You don&#39;t just know the buzzwords; you understand the architecture.</li>\n<li>A Track Record of Winning: Consistent history of over-achieving quotas at high-growth Enterprise software companies.</li>\n<li>Consumption Model Mastery: Experience driving usage-based and &#39;commit-and-expand&#39; engagement models.</li>\n<li>Ecosystem Orchestrator: Skilled in co-selling with Cloud Giants (AWS, Azure, GCP) and Global Systems Integrators (GSIs).</li>\n<li>Value-Based Seller: Expert at building data-driven business cases that secure immediate buy-in from C-level executives.</li>\n<li>Language: Professional proficiency in English</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>","url":"https://yubhub.co/jobs/job_6bd506fa-79a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8349751002","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data","Cloud","SaaS","Data Lakehouse","Apache Spark","Delta Lake","MLflow","AI","Machine Learning"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:35.349Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Big Data, Cloud, SaaS, Data Lakehouse, Apache Spark, Delta Lake, MLflow, AI, Machine 
Learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d99dda6c-8c3"},"title":"Senior Software Engineer (Infrastructure)","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. Our Infrastructure Backend teams span many domains across our essential service platforms.</p>\n<p>As a Senior Software Engineer at Databricks India, you can get to work across various challenges such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>We are looking for a Senior Software Engineer with 7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or similar language. 
You should also have 4+ years of experience developing large-scale distributed systems from scratch, experience working on a SaaS platform or with Service-Oriented Architectures, and experience working on Infrastructure-related projects.</p>","url":"https://yubhub.co/jobs/job_d99dda6c-8c3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7647289002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Apache Spark","Databricks","Kubernetes","Distributed systems","Service-Oriented Architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:23.954Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Apache Spark, Databricks, Kubernetes, Distributed systems, Service-Oriented Architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9a2bbb70-2c0"},"title":"Senior Software Engineer - Data Platform","description":"<p>We are seeking a Senior Software Engineer to join our team in Bengaluru, India. As a Senior Software Engineer at Databricks, you will be responsible for designing, developing, and deploying large-scale distributed systems, including backend, DDS, and full-stack engineering. 
You will work closely with our product management team to bring great user experiences to our customers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop reliable and high-performance services and client libraries for storing and accessing large amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>\n<li>Build scalable services using Scala, Kubernetes, and data pipelines, such as Apache Spark and Databricks.</li>\n<li>Work on a SaaS platform or with Service-Oriented Architectures.</li>\n<li>Collaborate with our DDS team to develop and deploy data-centric solutions using Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>\n<li>Develop and maintain high-quality code, following best practices and coding standards.</li>\n<li>Participate in code reviews and provide feedback to improve the quality of the codebase.</li>\n<li>Troubleshoot and resolve issues that arise during deployment and operation.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field.</li>\n<li>7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or similar language.</li>\n<li>Experience developing large-scale distributed systems from scratch.</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n<li>Strong understanding of software design patterns and principles.</li>\n<li>Excellent problem-solving skills and attention to detail.</li>\n<li>Ability to work effectively in a team environment.</li>\n<li>Strong communication and collaboration skills.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>\n<li>Knowledge of cloud-based storage systems, such as AWS S3 and Azure Blob Store.</li>\n<li>Familiarity with containerization using Docker and Kubernetes.</li>\n<li>Experience with continuous integration and continuous 
deployment (CI/CD) pipelines.</li>\n<li>Strong understanding of security principles and practices.</li>\n<li>Familiarity with agile development methodologies and version control systems, such as Git.</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>\n<p>Our Commitment to Diversity and Inclusion:</p>\n<p>Databricks is an equal opportunities employer and welcomes applications from diverse candidates. We are committed to creating an inclusive and respectful work environment where everyone feels valued and empowered to contribute their best work.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9a2bbb70-2c0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7601580002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","Apache Spark","Data Plane Storage","Delta Lake","Delta Pipelines","Kubernetes","Docker","Git","Agile development methodologies","Version control systems"],"x-skills-preferred":["Cloud-based storage systems","Containerization","Continuous integration and continuous deployment (CI/CD) pipelines","Security principles and practices"],"datePosted":"2026-04-18T15:49:17.527Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, Apache Spark, Data Plane Storage, Delta Lake, Delta Pipelines, 
Kubernetes, Docker, Git, Agile development methodologies, Version control systems, Cloud-based storage systems, Containerization, Continuous integration and continuous deployment (CI/CD) pipelines, Security principles and practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bac99a46-7f5"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<ul>\n<li>You will 
work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n</ul>\n<ul>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. 
Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>","url":"https://yubhub.co/jobs/job_bac99a46-7f5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461243002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","CI/CD","MLOps","distributed computing","Python","Scala","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:01.745Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Denver, Colorado"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, 
GCP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_760c3e88-e35"},"title":"Senior Product Manager, Data","description":"<p>Job Title: Senior Product Manager, Data</p>\n<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>\n<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>\n<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>\n<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>\n<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>\n<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>\n<li>Contribute to data governance and quality initiatives, focusing on data 
consistency, lineage tracking, and compliance with security standards</li>\n<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>\n<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>\n<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>\n<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>\n<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>\n<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>\n<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>\n<li>Awareness of data security, compliance, and governance best practices</li>\n<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. 
We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p>Salary Range: $143,000 to $210,000</p>\n<p>Benefits:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Workplace:</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. 
Teams also gather quarterly to support collaboration.</p>","url":"https://yubhub.co/jobs/job_760c3e88-e35","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649824006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,000 to $210,000","x-skills-required":["data product management","data architecture","enterprise data engineering","data lakes","data warehouses","ETL/ELT and streaming pipelines","data governance frameworks","modern data stack technologies","Snowflake","BigQuery","Databricks","Apache Spark","Airflow","DBT","Kafka","data modeling","domain-driven design","scalable data platforms","BI and analytics platforms","Tableau","Looker","Power BI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:58.405Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power 
BI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":143000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bf25e8de-318"},"title":"Director of Engineering (Data Infrastructure)","description":"<p>Job Title: Director of Engineering (Data Infrastructure)</p>\n<p>Location: Bengaluru, India</p>\n<p>We&#39;re looking for a seasoned Director of Engineering to lead our data infrastructure organization in Bengaluru. As a founding technical leader in our fastest-growing engineering hub, you will be responsible for building world-class teams and shaping architectural decisions that ripple across the company.</p>\n<p>About the Role:</p>\n<ul>\n<li>You will build the data infrastructure organization that makes Databricks&#39; continued growth possible.</li>\n<li>Establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack.</li>\n<li>Define what world-class infrastructure looks like for the next decade of data platforms.</li>\n</ul>\n<p>Responsibilities:</p>\n<ul>\n<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error.</li>\n<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent.</li>\n<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain.</li>\n<li>Ship platforms that compound engineering leverage across Databricks.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue 
impact, customer escalations, or regulatory consequences.</li>\n<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability.</li>\n<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly.</li>\n<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability.</li>\n<li>Proven ability to scale infrastructure organizations in high-growth environments.</li>\n<li>Communication skills to make complex infrastructure decisions legible to executives.</li>\n</ul>\n<p>What You&#39;ll Need:</p>\n<ul>\n<li>BS in Computer Science or Engineering; MS or Ph.D. preferred.</li>\n<li>Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>\n<p>Our Commitment to Diversity and Inclusion:</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel.</p>\n<p>Compliance:</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to grant such access.</p>","url":"https://yubhub.co/jobs/job_bf25e8de-318","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8290810002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems engineering","infrastructure organizations","petabyte-scale data pipelines","distributed systems reliability","SLOs/SLIs","chaos engineering","disaster recovery","observability"],"x-skills-preferred":["Apache Spark","Delta Lake","large-scale data infrastructure","fintech/billing systems"],"datePosted":"2026-04-18T15:48:43.683Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems engineering, infrastructure organizations, petabyte-scale data pipelines, distributed systems reliability, SLOs/SLIs, chaos engineering, disaster recovery, observability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26f523c0-bbd"},"title":"Resident Solutions Architect - Manufacturing","description":"<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete 
projects according to specification with excellent customer service.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n</ul>\n<ul>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Provide an escalated level of support for customer operational issues</li>\n</ul>\n<ul>\n<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala</li>\n</ul>\n<ul>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n</ul>\n<ul>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD for production deployments</li>\n</ul>\n<ul>\n<li>Working knowledge of MLOps</li>\n</ul>\n<ul>\n<li>Design and deployment of performant 
end-to-end data architectures</li>\n</ul>\n<ul>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<ul>\n<li>Experience working with clients and managing conflicts</li>\n</ul>\n<ul>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n</ul>\n<ul>\n<li>Ability to travel up to 30% when needed</li>\n</ul>\n<p>Pay Range Transparency Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. 
For more information regarding which range your location is in visit our page here.</p>","url":"https://yubhub.co/jobs/job_26f523c0-bbd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8494154002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems","Apache Spark","CI/CD","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:21.946Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, Massachusetts"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_03807164-210"},"title":"Resident Solutions Architect","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data 
science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the Manager, Professional Services.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical Big Data projects which may include building reference architectures, how-to&#39;s and production grade MVPs</li>\n</ul>\n<ul>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n</ul>\n<ul>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n</ul>\n<ul>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native and Data Lakes in a customer-facing post-sales, technical architecture or consulting role</li>\n</ul>\n<ul>\n<li>6+ years of experience working on Big Data Architectures independently</li>\n</ul>\n<ul>\n<li>Strong experience working in the Databricks ecosystem</li>\n</ul>\n<ul>\n<li>Comfortable writing code in either Python or Scala.</li>\n</ul>\n<ul>\n<li>Experience working across Cloud Platforms (GCP / AWS / Azure)</li>\n</ul>\n<ul>\n<li>Documentation and white-boarding skills.</li>\n</ul>\n<ul>\n<li>Build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer 
projects.</li>\n</ul>\n<ul>\n<li>Willingness to travel for onsite customer engagements within India.</li>\n</ul>","url":"https://yubhub.co/jobs/job_03807164-210","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8081658002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Python","Scala","GCP","AWS","Azure"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:20.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Python, Scala, GCP, AWS, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_beca8c16-9a6"},"title":"Director of Engineering (Data Infrastructure)","description":"<p>Job Title: Director of Engineering (Data Infrastructure)</p>\n<p>In this leadership opportunity, you will build the data infrastructure organization that makes Databricks&#39; continued growth possible. 
You&#39;ll establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack, alongside multi-region data ingestion, developer platforms, and deployment automation that eliminate friction at petabyte scale.</p>\n<p>This isn&#39;t about maintaining what exists; it&#39;s about architecting the infrastructure that enables Databricks to scale while reducing operational burden. You&#39;ll define what world-class infrastructure looks like for the next decade of data platforms.</p>\n<p>The impact you&#39;ll have:</p>\n<ul>\n<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error, building disaster recovery that&#39;s provably reliable, testing frameworks that catch what production sees, correctness systems that make billing errors structurally impossible, and observability that predicts failures before they happen</li>\n</ul>\n<ul>\n<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent, hiring multiple engineering managers who become force multipliers, and creating a culture where solving hard distributed systems problems at scale is the daily work</li>\n</ul>\n<ul>\n<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain, driving reliability improvements that prevent millions in revenue loss while eliminating operational toil through frameworks that make systems self-healing, self-tuning, and self-documenting</li>\n</ul>\n<ul>\n<li>Ship platforms that compound engineering leverage across Databricks: correctness frameworks that catch billing errors before customers do, deployment automation that makes regional expansion push-button, data integration systems that process petabyte-scale flows without human intervention, and 
testing infrastructure where comprehensive coverage is automatic, not heroic</li>\n</ul>\n<ul>\n<li>Position infrastructure as product by treating internal engineering teams as customers with SLAs, measuring adoption and satisfaction, iterating based on feedback, and demonstrating that every dollar invested in infrastructure returns multiplicative gains in product velocity, reliability improvements, or cost reductions</li>\n</ul>\n<p>You&#39;ll need:</p>\n<ul>\n<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue impact, customer escalations, or regulatory consequences - and you built the systems and teams that made those failures rare</li>\n</ul>\n<ul>\n<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability where you can engage from &#39;how should we architect multi-region disaster recovery&#39; to &#39;why is this Kafka cluster exhibiting this latency pattern&#39; while knowing when to coach versus when to decide</li>\n</ul>\n<ul>\n<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly while building toward architectural end states, positioning infrastructure investments as business enablers rather than cost centers, and making build-vs-buy decisions that compound over time</li>\n</ul>\n<ul>\n<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability that predicts failures before they happen</li>\n</ul>\n<ul>\n<li>Proven ability to scale infrastructure organizations in high-growth environments where you&#39;ve doubled engineering while maintaining quality bar, developed engineering managers, and created teams where retention is high because the problems are interesting and the culture is 
strong</li>\n</ul>\n<ul>\n<li>Communication skills to make complex infrastructure decisions legible to executives (translating technical investments into business outcomes), influence cross-functional partners without authority, build trust across global teams in different timezones with different working styles, and represent Databricks&#39; technical brand externally</li>\n</ul>\n<p>BS in Computer Science or Engineering; MS or Ph.D. preferred. Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</p>","url":"https://yubhub.co/jobs/job_beca8c16-9a6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8220993002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems engineering","infrastructure organization","petabyte-scale data pipelines","distributed systems reliability","Apache Spark","Delta Lake","large-scale data infrastructure","fintech/billing systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:18.029Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems engineering, infrastructure organization, petabyte-scale data pipelines, distributed systems reliability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems"}]}