<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1bd2d1b2-84f</externalid>
      <Title>Senior Machine Learning Researcher</Title>
      <Description><![CDATA[<p>We are seeking a senior machine learning researcher to join our Core AI team.</p>
<p>As part of the team, you will help solve complex business problems by developing viable cutting-edge AI/ML solutions.</p>
<p>You will develop and implement creative solutions that fundamentally transform business processes, delivering breakthrough improvements rather than incremental changes.</p>
<p>You will work closely with other AI/ML researchers and engineers, SWEs, product owners/managers, and business stakeholders, and participate in the full lifecycle of solution development, including requirements gathering with business, experimentation and algorithmic exploration, development, and assistance with productization.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work independently or as part of a team to design and implement high-accuracy solutions with delightful user experiences, utilizing ML, NLP, GenAI, and agentic technologies.</li>
<li>Participate in all aspects of solution development, including ideation and requirements gathering with business stakeholders, experimentation and exploration to identify strong solution approaches, and solution development.</li>
<li>Prototype, test, and iterate on novel AI models and approaches to solve complex business challenges.</li>
<li>Collaborate with cross-functional teams to identify opportunities where AI can create significant business value, and transition solutions into production systems.</li>
<li>Research and stay current with the latest advancements in machine learning and AI technologies.</li>
<li>Participate in code reviews, technical discussions, and knowledge-sharing sessions.</li>
<li>Communicate technical concepts and transformative ideas effectively to both technical and non-technical stakeholders.</li>
</ul>
<p>Required Skills &amp; Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree with 10+ years of experience, Master&#39;s with 7+ years, or PhD with 5+ years, in Computer Science, Data Science, Machine Learning, or a related field.</li>
<li>Deep expertise and a proven ability to develop high-accuracy, high-value solutions to business problems in the NLP, Generative AI, Agentic AI, and/or ML space.</li>
<li>Hands-on experience with data processing, experimentation, and exploration.</li>
<li>Strong programming skills in Python.</li>
<li>Experience with cloud platforms (AWS, Azure, GCP) for deploying ML solutions.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication skills to collaborate with technical and non-technical stakeholders.</li>
<li>Ability to work independently and collaboratively.</li>
</ul>
<p>Additional Preferred Skills &amp; Qualifications:</p>
<ul>
<li>Understanding of the financial markets, including experience with financial datasets, is strongly preferred.</li>
<li>Experience with ML frameworks such as PyTorch and TensorFlow.</li>
<li>Familiarity with MLOps practices and tools such as SageMaker, MLflow, or Airflow.</li>
<li>Previous experience working in an Agile environment.</li>
</ul>
<p>Millennium offers a total compensation package that includes a base salary, a discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000; this range is specific to New York and may change in the future.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Machine Learning, NLP, GenAI, Agentic technologies, Data processing, Experimentation, Exploration, Cloud platforms (AWS, Azure, GCP), Problem-solving skills, Communication skills, PyTorch, TensorFlow, MLOps practices and tools (SageMaker, MLflow, Airflow), Agile environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company focuses on artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954012324</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country>Germany</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80d15de9-aa7</externalid>
      <Title>Senior Data Scientist - Rankings &amp; Recommendations (all genders)</Title>
      <Description><![CDATA[<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers.</p>
<p>You will join a cross-functional Product team, Search Intelligence, which is responsible for optimizing ranking and recommendations for users visiting our website.</p>
<p>You&#39;ll be part of the broader Data Science team, which operates across cross-functional domain teams - giving you access to shared knowledge, best practices, and collaboration opportunities beyond your domain.</p>
<p>You’ll collaborate daily with Data Engineers, Analysts, Product Managers, and Back-end Engineers.</p>
<p>You’ll report to the Team Lead, Data Science.</p>
<p>Together, we turn data into actionable insights and innovative technology that powers how millions of guests find and book their perfect holiday home.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Python • Airflow • dbt • AWS (SageMaker, Redshift, Athena) • MLflow</li>
</ul>
<p><strong>The Ranking challenge at Holidu</strong></p>
<p>Holidu lists over 4 million vacation rental properties. Our ranking and personalization systems determine which of them our 70+ million annual users see, directly impacting search conversion and business results.</p>
<p>What&#39;s live today:</p>
<ul>
<li>Multi-stage ranking pipeline: Reinforcement-learning-based cold ranking, contextual re-ranking, and personalized recommendations.</li>
<li>Cold-start models for new properties with limited behavioral data.</li>
<li>Personalized recommendations based on user browsing patterns.</li>
</ul>
<p>Some of the hard problems we&#39;re solving:</p>
<ul>
<li>Multi-objective optimization: Balancing user relevance, conversion probability, and business value.</li>
<li>Personalization without history: Most users are anonymous or first-time visitors.</li>
<li>Cold-start: A significant share of our inventory is new each quarter. How do we surface quality properties before we have behavioral data?</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>You&#39;ll shape the ranking and recommendation systems that millions of guests rely on to find their holiday home. With access to extensive datasets and modern ML infrastructure, you&#39;ll work end-to-end - from identifying opportunities and prototyping new approaches to shipping models to production and measuring their impact.</p>
<ul>
<li>Develop high-impact models and improvements for our ranking, recommendation, and personalization systems - with the freedom to explore new, creative approaches.</li>
<li>Take models from conception to production, continuously monitor their performance, and iterate to enhance accuracy and efficiency.</li>
<li>Design and run A/B tests as a core part of ranking development; success is measured by successful experiments per quarter and time-to-decision.</li>
<li>Collaborate closely with Product Managers and Software Engineers to identify, prioritize, and ship ranking improvements.</li>
<li>Ensure model reliability in production, measured by online/offline agreement, model and data drift KPIs, latency and uptime SLAs, and automated monitoring coverage.</li>
<li>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</li>
<li>Demonstrate leadership in data science projects by driving technical direction, scoping initiatives, and guiding the team&#39;s prioritization and project execution.</li>
</ul>
<p><strong>Your backpack is filled with</strong></p>
<ul>
<li>5+ years of experience as a Data Scientist, with a proven track record of applying ML models to solve real business problems.</li>
<li>Experience working on ranking models or recommender systems is a strong advantage.</li>
<li>A degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field.</li>
<li>Strong foundations in statistics, predictive modeling, and machine learning techniques, with hands-on experience using Python and SQL.</li>
<li>Experience with Airflow and dbt is a plus.</li>
<li>Solid understanding of business operations and the ability to translate data insights into clear, actionable outcomes.</li>
<li>A collaborative mindset and enthusiasm for using data to build world-class products that make a real impact.</li>
<li>AI Proficiency: You are comfortable using AI to enhance coding, planning, and monitoring. This includes successfully integrating AI tools (such as Claude Code, Codex, Copilot, etc.) into your workflow and teaching others to use them efficiently.</li>
</ul>
<p><strong>Our adventure includes</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious, and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>
<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>
</ul>
<p>Need a sneak peek? Check out the adventure that awaits you on Instagram @lifeatholidu and dive straight into the world of Tech at Holidu for more insights!</p>
<p><strong>Want to travel with us?</strong></p>
<p>Apply online on our careers page! Your first travel contact will be Lucia from HR.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Airflow, dbt, AWS, MLflow, Machine Learning, Statistics, Predictive Modeling, SQL, AI, Data Science, Ranking Models, Recommender Systems, Collaboration, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a leading online marketplace for vacation rentals, listing over 4 million properties and serving 70+ million annual users.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2413808</Applyto>
      <Location>Munich, Germany</Location>
      <Country>Germany</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10290548-1ea</externalid>
      <Title>Solutions Architect - Public Sector (LEAPS)</Title>
      <Description><![CDATA[<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to accomplish value.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems.</li>
<li>Providing technical leadership for customers to evaluate and adopt Databricks.</li>
<li>Consulting on big data architecture, implementing proof of concepts for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications.</li>
<li>Building and presenting reference architectures, how-tos, and demo applications for customers.</li>
<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars.</li>
<li>Traveling to customers in your region.</li>
</ul>
<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data engineering, data analytics, and data science and machine learning.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>247500</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8320126002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>023e0d6c-5a8</externalid>
      <Title>Geo Core Account Executive - Oil, Gas &amp; Energy</Title>
      <Description><![CDATA[<p>As an Enterprise Account Executive on our Oil, Gas and Energy enterprise sales team, you will be responsible for selling Databricks&#39; enterprise cloud data platform powered by Apache Spark to large-scale industrial clients.</p>
<p>You will present a territory plan within the first 90 days, meet with CIOs, IT executives, LOB executives, Program Managers, and other important partners, and close both new accounts and existing accounts.</p>
<p>To succeed in this role, you will need to have previously worked in an early-stage company and have experience in field sales within big data, Cloud, and SaaS sales.</p>
<p>You will also need prior customer relationships with CIOs, program managers, and essential decision-makers, and the ability to explain intricate cloud technologies in simple terms.</p>
<p>The pay range for this role is $220,100-$302,600 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,100-$302,600 USD</Salaryrange>
      <Skills>Enterprise sales, Cloud sales, Big data sales, SaaS sales, Apache Spark, Lakehouse, Delta Lake, MLflow, Prior customer relationships with CIOs, Program managers, Essential decision-makers</Skills>
      <Category>Sales</Category>
      <Industry>Energy</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>220100</Compensationmin>
      <Compensationmax>302600</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439679002</Applyto>
      <Location>Texas</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a5be03ca-ea6</externalid>
      <Title>Named Core Account Executive - Industrial</Title>
      <Description><![CDATA[<p>As a Named Core Account Executive - Industrial at Databricks, you will be responsible for managing a small set of clients in our Industrial subvertical. You will come with an informed point of view on Big Data, Advanced Analytics, and AI which will help to guide your successful execution strategy and allow you to provide genuine value to the client.</p>
<p>Your responsibilities will include building relationships with CIOs, IT executives, LOB executives, Program Managers, and other important partners. You will drive value-based growth within the account, expand the Databricks footprint into new business units and use cases, and exceed activity, pipeline, and revenue targets.</p>
<p>To succeed in this role, you will need to have previously excelled in an early-stage company, have previous field sales experience within big data, Cloud, SaaS, and a consumption selling motion, and have prior customer relationships with CIOs, program managers, and essential decision makers at local accounts.</p>
<p>The pay range for this role is $272,000-$374,000 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$272,000-$374,000 USD</Salaryrange>
      <Skills>Big Data, Advanced Analytics, AI, Cloud, SaaS, Sales, Apache Spark, Delta Lake, MLflow</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified data analytics and AI platform. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>272000</Compensationmin>
      <Compensationmax>374000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439683002</Applyto>
      <Location>Northeast - United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d46741a-b4c</externalid>
      <Title>Senior Systems Engineer, OS Automation</Title>
      <Description><![CDATA[<p>CoreWeave is looking for a Senior Systems Engineer who is ready to evolve beyond traditional DevOps. You will start by stabilizing and scaling our Linux OS and Kernel build pipelines. Once the foundation is set, you will lead the transition to AI-native infrastructure, building &#39;smart&#39; workflows that don&#39;t just report errors, but understand and fix them.</p>
<p>You are a Systems Engineer at heart, but you are ready to apply LLMs, RAG, and predictive modeling to solve infrastructure challenges at scale.</p>
<p>Our Team&#39;s Stack:</p>
<ul>
<li>Languages: Python, Go, bash/sh</li>
<li>Observability: Prometheus, Victoria Metrics, Grafana</li>
<li>OS &amp; Kernel: Linux Kernel (custom build), Ubuntu</li>
<li>Hardware: Intel/AMD/ARM CPUs, Nvidia GPUs, DPUs, Infiniband and Ethernet NICs</li>
<li>Containerization: Docker, Kubernetes (k8s), KubeVirt, containerd, kubelet</li>
</ul>
<p>Responsibilities:</p>
<ul>
<li>Pipeline Architecture: Design, maintain, and automate reproducible OS image build pipelines for our massive fleet of GPU-accelerated servers.</li>
<li>Kernel Distribution: Collaborate with kernel engineers to package, validate, and distribute custom Linux builds across Intel, AMD, and ARM architectures.</li>
<li>Dependency Management: Build tooling to manage dependencies, versioning, and release workflows, ensuring hermetic builds.</li>
<li>Telemetry &amp; Metrics: Standardize the collection of build metrics to create a baseline for future AI modeling.</li>
<li>&#39;Smart&#39; CI/CD &amp; Auto-Remediation: Architect AI agents that ingest and analyze build logs in real-time. Develop systems that auto-triage errors, categorize failure patterns, and generate context-aware fix suggestions for engineering teams.</li>
<li>Predictive Regression Modeling: Design ML workflows that utilize historical performance data to detect kernel and OS regressions (latency, throughput, stability) in staging environments before they impact production.</li>
<li>Dynamic Kernel Tuning: Implement closed-loop feedback systems that analyze real-time system metrics and automatically suggest or apply sysctl parameter optimizations for specific customer workloads.</li>
<li>Next-Gen ChatOps: Engineer LLM-driven interfaces for Slack/internal tools, enabling stakeholders to query build statuses, request log summaries, or provision resources using natural language commands.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of professional experience in Linux Systems Engineering, Release Engineering, or DevOps.</li>
<li>Deep knowledge of Linux internals (boot process, kernel modules, networking stack).</li>
<li>Experience with package management (Debian/Ubuntu) and build systems.</li>
<li>Strong proficiency in Python (essential for the AI integration aspects of this role).</li>
<li>Demonstrable experience integrating API-based AI models (OpenAI, Anthropic, or local open-source models) into software workflows.</li>
<li>Understanding of RAG (Retrieval-Augmented Generation) architectures for querying technical documentation or logs.</li>
<li>Experience building event-driven automation (e.g., using webhooks to trigger analysis agents).</li>
<li>Familiarity with data structures required for vector search or time-series analysis.</li>
</ul>
<p>Nice-to-haves:</p>
<ul>
<li>Experience with Kubeflow or MLflow.</li>
<li>Background in High-Performance Computing (HPC).</li>
<li>Experience fine-tuning small language models (SLMs) for code or log analysis tasks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$153,000 to $242,000</Salaryrange>
      <Skills>Linux Systems Engineering, Release Engineering, DevOps, Python, API-based AI models, RAG (Retrieval-Augmented Generation), Event-driven automation, Vector search, Time-series analysis, Kubeflow, MLflow, High-Performance Computing (HPC), Small language models (SLMs)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>153000</Compensationmin>
      <Compensationmax>242000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4396057006</Applyto>
      <Location>Livingston, NJ / New York City, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d94b9cf-773</externalid>
      <Title>Machine Learning Intern Fall 2026 (Toronto)</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We&#39;re looking for a Machine Learning Intern to join our team in Toronto. As a Machine Learning Intern, you will work on tackling new challenges in machine learning and artificial intelligence. You will join our engineering teams as we maneuver through exponential growth and massive scale while building awesome products and features, creating visually rich experiences, spearheading the discovery problem, and pinpointing tomorrow&#39;s engineering challenges.</p>
<p>Responsibilities</p>
<ul>
<li>Lead your own project from start to finish, contributing to cutting-edge research in machine learning and artificial intelligence that can be applied to Pinterest problems</li>
<li>Collect, analyze, and synthesize findings from data and build intelligent data-driven models</li>
<li>Write clean, efficient, and sustainable code</li>
<li>Use machine learning, natural language processing, and graph analysis to solve modeling and ranking problems across discovery, ads and search</li>
<li>Scope and independently solve moderately complex problems</li>
<li>Demonstrate accountability for the quality and completion of your tasks and projects, collaborating with your team and seeking guidance as needed</li>
</ul>
<p>Requirements</p>
<ul>
<li>Working towards a Master&#39;s or PhD degree in Computer Science, ML, NLP, Statistics, Information Sciences or related field</li>
<li>Experience with machine learning (ranking, computer vision, NLP, content recommendations, embeddings, information retrieval, etc.)</li>
<li>Experience with big data technologies (e.g., Hadoop/Spark) and scalable real-time systems that process streaming data</li>
<li>Strong interest in research and applying machine learning and AI to drive meaningful product innovation and user impact</li>
<li>Exposure to ML, AI, data analytics, statistics, or related technical fields, through research, coursework, projects, or internships</li>
<li>Proficiency in at least one systems language (Java, C++, Python) or one ML framework (TensorFlow, PyTorch, MLflow)</li>
<li>Experience in research and in solving analytical problems</li>
<li>Strong communicator and team player with the ability to find solutions for open-ended problems</li>
</ul>
<p>Why Intern at Pinterest?</p>
<ul>
<li>Meaningful Work: Contribute to projects that impact millions of users worldwide.</li>
<li>Mentorship: Learn from and be guided by experienced engineers and researchers in the field.</li>
<li>Growth and Development: Participate in professional development workshops and networking events to build your skills and connections.</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$6,000 - $9,500 CAD monthly</Salaryrange>
      <Skills>Machine Learning, Artificial Intelligence, Python, Java, C++, Hadoop, Spark, TensorFlow, PyTorch, MLflow, Natural Language Processing, Graph Analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to save and share images and videos. It has over 550 million users worldwide.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7268778</Applyto>
      <Location>Toronto, ON, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ccc8c6-f09</externalid>
      <Title>Geo Hunter Account Executive, Manufacturing &amp; High-Tech</Title>
      <Description><![CDATA[<p>As a Geo Hunter Enterprise Account Executive at Databricks, you will be responsible for selling into and activating Large Manufacturing accounts. You will be a strategic sales professional with experience in selling innovation and change through customer vision expansion. Your goal will be to guide deals forward to compress decision cycles and close exciting deals. We offer accelerators above 100% quota attainment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners</li>
<li>Closing both new accounts and existing accounts</li>
<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>
<li>Exceeding activity, pipeline, and revenue targets</li>
<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>
<li>Using a solution-based approach to selling and creating value for customers</li>
<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark</li>
<li>Ensuring 100% satisfaction among all customers</li>
<li>Prioritizing opportunities and applying appropriate resources</li>
<li>Building a plan for success internally at Databricks and externally with your accounts</li>
</ul>
<p>We are looking for someone with:</p>
<ul>
<li>Previous experience in an early-stage company and knowledge of how to navigate it successfully</li>
<li>Field sales experience within big data, Cloud, or SaaS sales</li>
<li>Experience managing large, complex Manufacturing accounts is preferred</li>
<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>
<li>Ability to articulate intricate cloud technologies simply</li>
<li>5+ years experience exceeding sales quotas</li>
<li>Success closing new accounts while working existing accounts</li>
<li>Understanding of Spark and big data preferable</li>
<li>Passion for cloud technologies</li>
<li>Bachelor&#39;s Degree</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$167,100-$229,800 USD</Salaryrange>
      <Skills>big data, Cloud, SaaS sales, sales quotas, Spark, Apache Spark, Delta Lake, MLflow, cloud technologies, customer vision expansion, solution-based approach, customer satisfaction</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8193347002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>002cbb3c-9d8</externalid>
      <Title>Senior Software Engineer- Tokyo</Title>
      <Description><![CDATA[<p>As a Sr. Software Engineer on the AI OSS Ecosystem Team, you will play a key role in building and maintaining our open-source AI/ML platforms to enable users to train, deploy and monitor models and GenAI agents at scale.</p>
<p>Your responsibilities will include designing and implementing platform capabilities to support the AI/ML development and productionization lifecycle, including training, evaluation, deployment, monitoring, and management of models and agents.</p>
<p>You will also design and implement platform integrations with various frameworks in the AI/ML ecosystem, collaborate with the AI/ML community across the world to advance the state-of-the-art in AIOps, and ensure the latest AI/ML tooling advancements are available to Databricks&#39; customers.</p>
<p>Additionally, you will mentor and guide junior engineers on the team by helping with project planning, technical decisions, and code and document review.</p>
<p>We are looking for a highly skilled and experienced software engineer with a strong background in AI/ML and a passion for building and maintaining open-source platforms.</p>
<p>The ideal candidate will have a BS (or higher) in Computer Science or a related field, and 5+ years of hands-on experience building production systems using at least one of the following programming languages: Python (preferred), Scala, or Java.</p>
<p>Experience building and maintaining software tools and frameworks for AI/ML, ideally in an open-source environment, is also required.</p>
<p>Familiarity with AI/ML and AIOps concepts and technologies, such as model training, deployment, and monitoring, is essential.</p>
<p>A deep understanding and experience in working with agent frameworks such as LangChain, LlamaIndex, DSPy, or other similar projects is preferred.</p>
<p>Significant contributions to open-source projects in the AI/ML domain, such as SparkML, TensorFlow, PyTorch, MLflow, or other similar projects, are also preferred.</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>
<p>For specific details on the benefits offered in your region, please click here.</p>
<p>We are committed to diversity and inclusion and welcome applications from candidates of all backgrounds.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Scala, Java, AI/ML, AIOps, model training, deployment, monitoring, LangChain, LlamaIndex, DSPy, SparkML, TensorFlow, PyTorch, MLflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8350959002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ed89fde9-362</externalid>
      <Title>Software Engineer- Fullstack- Singapore</Title>
      <Description><![CDATA[<p>We are seeking a Software Engineer to join our AI OSS Ecosystem Team. As a member of this team, you will play a key role in building and maintaining our open-source AI/ML platforms to enable users to train, deploy and monitor models and GenAI agents at scale.</p>
<p>The impact you&#39;ll have:</p>
<ul>
<li>Design and implement platform capabilities to support the AI/ML development and productionization lifecycle including training, evaluation, deployment, monitoring, and management of models and agents</li>
<li>Design and implement platform integrations with various frameworks in the AI/ML ecosystem</li>
<li>Collaborate with the AI/ML community across the world to advance the state-of-the-art in AIOps</li>
<li>Ensure the latest AI/ML tooling advancements are available to Databricks&#39; customers, thereby enabling organizations around the world to get more value from their data</li>
<li>Mentor and guide junior engineers on the team by helping with project planning, technical decisions, and code and document review</li>
</ul>
<p>What we look for:</p>
<ul>
<li>BS (or higher) in Computer Science, or a related field</li>
<li>3+ years of hands-on experience building production systems using at least one of the following programming languages: Python (preferred), Scala, or Java</li>
<li>Experience building and maintaining software tools and frameworks for AI/ML, ideally in an open-source environment</li>
<li>Familiarity with AI/ML and AIOps concepts and technologies, such as model training, deployment, and monitoring</li>
<li>Deep understanding and experience in working with agent frameworks such as LangChain, LlamaIndex, DSPy, or other similar projects</li>
<li>Significant contributions to open-source projects in the AI/ML domain, such as SparkML, TensorFlow, PyTorch, MLflow, or other similar projects</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Scala, Java, AI/ML, AIOps, model training, deployment, monitoring, LangChain, LlamaIndex, DSPy, SparkML, TensorFlow, PyTorch, MLflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8341810002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b00b781c-eba</externalid>
      <Title>Senior Software Engineer - Database Engine Internals</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Software Engineer to join our team in designing next-generation systems for database engine internals. As part of this multi-year journey, you&#39;ll drive requirements clarity and design decisions for ambiguous problems. Your responsibilities will include producing technical design documents and project plans, developing new features, mentoring junior engineers, testing and rolling out to production, and monitoring.</p>
<p>Our ideal candidate has a passion for database systems, storage systems, distributed systems, language design, or performance optimisation. They should be comfortable working towards a multi-year vision with incremental deliverables and be customer-oriented with a focus on having an impact. A minimum of 5 years of experience working on related systems is required; a PhD in databases or distributed systems is a plus.</p>
<p>In return, we offer a comprehensive benefits package and a commitment to diversity and inclusion. If you&#39;re excited about the opportunity to join our team and contribute to the development of next-generation database systems, please apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database systems, storage systems, distributed systems, language design, performance optimisation, Apache Spark, Delta Lake, MLflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company with over 10,000 organisations worldwide relying on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8012809002</Applyto>
      <Location>Belgrade, Serbia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6bd506fa-79a</externalid>
      <Title>Strategic Enterprise Account Executive (Digital Natives) | Eastern EMEA</Title>
      <Description><![CDATA[<p>Do you want to solve the world&#39;s toughest problems using the power of Data and AI? At Databricks, that is our daily reality. We are the pioneers of the Data Lakehouse, and we are looking for a world-class Strategic Enterprise Account Executive to join our Eastern EMEA team.</p>
<p>Your mission is high-stakes: you will own and scale one of our most significant Strategic Scaleups (Digital Natives) in the region. This isn&#39;t just a sales role; it is a partnership with a global unicorn that has transitioned into a massive enterprise. You will guide them through the next frontier of AI transformation.</p>
<p><strong>The Impact You Will Have</strong></p>
<ul>
<li>Architect the Strategy: Co-author a multi-year business plan with your team and ecosystem partners to exceed quarterly booking goals and accelerate customer usage.</li>
<li>Master the Use Case: Lead a &#39;Special Forces&#39; team of technical experts and partners to identify high-impact Big Data and AI use cases, proving the undeniable value of the Databricks Platform.</li>
<li>Drive Transformation: Execute your customer&#39;s AI roadmap through a blend of strategic partnerships, expert professional services, and high-level Executive Engagement.</li>
<li>Build Technical Trust: Develop a deep understanding of our product roadmap to become a trusted advisor to both C-level visionaries and technical champions.</li>
</ul>
<p><strong>What We Look For</strong></p>
<ul>
<li>The &#39;Unicorn&#39; Expert: Proven experience building deep, influential relationships with large, global &#39;mature unicorns.&#39; You understand the high-velocity, high-complexity culture of Digital Natives.</li>
<li>Industry Pedigree: Deep roots in the Big Data, Cloud, or SaaS sectors. You don&#39;t just know the buzzwords; you understand the architecture.</li>
<li>A Track Record of Winning: Consistent history of over-achieving quotas at high-growth Enterprise software companies.</li>
<li>Consumption Model Mastery: Experience driving usage-based and &#39;commit-and-expand&#39; engagement models.</li>
<li>Ecosystem Orchestrator: Skilled in co-selling with Cloud Giants (AWS, Azure, GCP) and Global Systems Integrators (GSIs).</li>
<li>Value-Based Seller: Expert at building data-driven business cases that secure immediate buy-in from C-level executives.</li>
<li>Language: Professional proficiency in English</li>
</ul>
<p><strong>Benefits</strong></p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Cloud, SaaS, Data Lakehouse, Apache Spark, Delta Lake, MLflow, AI, Machine Learning</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8349751002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7e58a91c-29e</externalid>
      <Title>Strategic Alliance Operations Director</Title>
      <Description><![CDATA[<p>The Strategic Alliance Operations Director will lead the Accenture Databricks Business Group (ADBG) and build the operating model that powers one of Databricks&#39; most strategic global partnerships.</p>
<p>In this role, you will own the governance, portfolio management, and execution cadence for the ADBG, while also overseeing a focused strategic investment program that accelerates joint innovation with Accenture.</p>
<p>As the Strategic Alliance Operations Director, your primary mission is to design, launch, and scale the global operating framework that ensures Databricks and Accenture execute consistently, predictably, and with clear accountability across all joint initiatives.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Establishing, operationalizing, and programmatizing the Accenture BG, including governance structures, operating cadence, and delivery standards for all joint Databricks-Accenture initiatives.</li>
<li>Maintaining an end-to-end, integrated portfolio view covering pipeline, active programs, dependencies, financial performance, delivery health, and risks.</li>
<li>Leading executive and operational cadences (QBRs, steering committees, exec reviews) to drive decisions, resolve escalations, and ensure alignment across Databricks and Accenture stakeholders.</li>
<li>Developing and maintaining KPI dashboards and reporting that provide clear visibility into revenue, utilization, program status, delivery quality, and customer outcomes.</li>
<li>Defining and rolling out standardized tools, templates, and best practices for planning, tracking, governance, and reporting, enabling consistent, scalable execution across regions and industries.</li>
<li>Working cross-functionally (C&amp;SI, ISV Ecosystem, GTM Strategy &amp; Ops, Business Strategy &amp; Ops, Industry, Legal, Finance, Product) to define a clear strategic investment framework that dovetails with PMO governance, including pillars, guardrails, and decision criteria.</li>
<li>Ensuring that all investments are aligned with Databricks&#39; evolving strategic goals and the broader Accenture BG portfolio, structured with clear milestones, owners, and success metrics that plug into PMO tracking and reporting, operationalized through existing PMO processes, and fully auditable with documented decisions and consolidated reporting via PMO dashboards.</li>
<li>Partnering with Accenture leadership and field teams to source, qualify, and shape investment opportunities, using standardized proposal formats that enable apples-to-apples evaluation by Databricks leadership.</li>
</ul>
<p>We look for an experienced Alliance Operations leader who is also a strategic builder: someone who anchors on operational excellence but is energized by shaping new, high-impact investment mechanisms within that structure.</p>
<p>Key qualifications include:</p>
<ul>
<li>Strong PMO and portfolio management leadership skills, with a track record of establishing governance and operating models in complex, multi-stakeholder environments.</li>
<li>Highly effective at building and leading cross-functional virtual teams across GTM, Finance, Product, Legal, and Partner organizations.</li>
<li>Exceptional ability to design and orchestrate processes, drive consensus across organizations, and resolve impasses while keeping execution on track.</li>
<li>Skilled negotiator with experience structuring and closing binding partner agreements that align strategic goals, risk, and return.</li>
<li>Strategic, analytical mindset with a strong bias for action; able to move quickly while maintaining rigor, transparency, and auditability.</li>
<li>15+ years of experience in program/portfolio management, partner operations, or strategic investments in hyper-growth or large-scale technology environments.</li>
<li>Demonstrated success standing up and leading PMO or portfolio functions for complex, global, multi-stakeholder initiatives.</li>
<li>Proven track record forming and leading cross-functional v-teams (GTM, Finance, Product, ISV/partner, Legal, Operations).</li>
<li>Experience negotiating and closing binding contracts with partners, including GSIs, RSIs, and ISVs.</li>
<li>Strong technical understanding of the Databricks product portfolio and modern cloud/data architectures.</li>
</ul>
<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range for this role is $143,700-$197,550 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$143,700-$197,550 USD</Salaryrange>
      <Skills>Portfolio Management, Governance, Process Design, Cross-Functional Leadership, Negotiation, Strategic Planning, Analytical Thinking, Program Management, Partner Operations, Strategic Investments, Apache Spark, Delta Lake, MLflow, Cloud/Data Architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439170002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1e0df4a3-dc0</externalid>
      <Title>Strategic Enterprise Account Executive - Life Sciences</Title>
      <Description><![CDATA[<p>As a Strategic Enterprise Account Executive at Databricks, you will be responsible for maintaining and growing a single existing account in the life sciences industry. You will work closely with CIOs, IT executives, LOB executives, Program Managers, and other important partners to identify and close quick, small wins while managing longer, complex sales cycles.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners</li>
<li>Closing both new accounts and existing accounts</li>
<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>
<li>Exceeding activity, pipeline, and revenue targets</li>
<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>
<li>Using a solution-based approach to selling and creating value for customers</li>
<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark</li>
<li>Ensuring 100% satisfaction among all customers</li>
<li>Prioritizing opportunities and applying appropriate resources</li>
<li>Building a plan for success internally at Databricks and externally with your accounts</li>
</ul>
<p>To succeed in this role, you will need to have:</p>
<ul>
<li>7+ years of experience exceeding sales quotas</li>
<li>Field sales experience within big data, Cloud, or SaaS sales</li>
<li>Experience managing large, complex Life Sciences accounts is preferred</li>
<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>
<li>The ability to articulate intricate cloud technologies simply</li>
<li>A passion for cloud technologies</li>
<li>A Bachelor&#39;s Degree</li>
</ul>
<p>In addition to a competitive salary, you will also be eligible for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$272,000-$374,000 USD</Salaryrange>
      <Skills>field sales experience, big data, Cloud, SaaS sales, sales quotas, customer relationships, cloud technologies, Apache Spark, Delta Lake, MLflow</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company, founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8359439002</Applyto>
      <Location>Remote - California; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f08d7a20-ff7</externalid>
      <Title>Strategic Core Account Executive</Title>
      <Description><![CDATA[<p>As a Strategic Enterprise Account Executive at Databricks, you will be responsible for selling into CMEG accounts specific to Gaming/Betting. You will need to understand the product in depth and communicate its value to customers and system integrators. Your goal will be to close deals and exceed activity, pipeline, and revenue targets.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Meeting with CIOs, IT executives, LOB executives, program managers, and other important partners</li>
<li>Closing both new accounts and existing accounts</li>
<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>
<li>Exceeding activity, pipeline, and revenue targets</li>
<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>
<li>Using a solution-based approach to selling and creating value for customers</li>
<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark™</li>
<li>Ensuring 100% satisfaction among all customers</li>
<li>Prioritising opportunities and applying appropriate resources</li>
<li>Building a plan for success internally at Databricks and externally with your accounts</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Previous experience in an early-stage company</li>
<li>Field sales experience within big data, Cloud, or SaaS sales</li>
<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>
<li>Ability to articulate intricate cloud technologies simply</li>
<li>7+ years of experience exceeding sales quotas</li>
<li>Expertise with financial services institutions preferable</li>
<li>Success closing new accounts while working existing accounts</li>
<li>Understanding of Spark and big data preferable</li>
<li>Passion for cloud technologies</li>
<li>Bachelor&#39;s Degree</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud technologies, Big data, Sales, Customer relationship management, Solution-based selling, Apache Spark, Delta Lake, MLflow, Financial services institutions, Gaming/Betting industry</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It has over 10,000 organisations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477727002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>870b07b0-501</externalid>
      <Title>Partner Solutions Architect</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect (PSA) for India, you will work with the technical and sales team members who work directly with our customers. You will develop &#39;technical champions&#39; within our top Partners, providing enablement on technical matters related to the Databricks product.</p>
<p>Working with our partners, you will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>As a member of our team, you will exercise and develop expertise in those areas, using open-source projects such as Apache Spark, MLflow, and Delta Lake; and major public cloud infrastructure and services.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide partners with the level of enablement they need to assist their clients in evaluating and adopting Databricks, including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>
<li>Engage with the partner technical community by leading workshops, seminars, and meet-ups</li>
<li>Serve as a Big Data Analytics expert on aspects of architecture and design, and share this expertise with our partner network</li>
<li>Show expertise by producing creative technical solutions and blog posts</li>
</ul>
<p>What we look for:</p>
<ul>
<li>8 years of customer-facing experience working with external clients or partners across a variety of industry markets</li>
<li>Core strength in either data engineering or data science</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience developing architectures within a public cloud (AWS, Azure, or GCP)</li>
<li>Hands-on experience in SQL, Python, Scala, or Java</li>
<li>Expertise in at least one of the following:</li>
<li>Data Engineering technologies (Ex: Apache Spark, Hadoop, Kafka)</li>
<li>Data Warehousing (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning technologies (Ex: pandas, scikit-learn, HPO)</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organisations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, SQL, Python, Scala, Java, Data Engineering technologies, Data Warehousing, Data Science and Machine Learning technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439182002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>027d391b-e83</externalid>
      <Title>Partner Solutions Architect</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect, you will work with Databricks&#39; Consulting and System Integrator (C&amp;SI) partners, teammates, and with the technical and sales team members who work directly with our customers.</p>
<p>You will develop &#39;technical champions&#39; within our top C&amp;SI Partners, providing enablement on technical matters related to the Databricks product. Working with our partners, you will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Data Intelligence Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>As a member of our team, you will exercise and develop expertise in those areas, using open-source projects such as Apache Spark™, MLflow, and Delta Lake; and major public cloud infrastructure and services. You will use this expertise to become a trusted advisor to C&amp;SI partners.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide partners with the level of enablement they need to assist their clients in evaluating and adopting Databricks, including hands-on Apache Spark™ programming and integration with the wider cloud ecosystem</li>
<li>Engage with the partner technical community by leading workshops, seminars, and meet-ups</li>
<li>Serve as a Big Data Analytics expert on aspects of architecture and design, and share this expertise with our partner network</li>
<li>Show expertise by producing creative technical solutions and blog posts</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of pre-sales or post-sales experience working with external clients or partners across a variety of industry markets</li>
<li>Understanding of a customer-facing pre-sales or consulting role, with a core strength in either data engineering or data science</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience developing architectures within a public cloud (AWS, Azure, or GCP)</li>
<li>Coding experience in SQL, Python, Scala, or Java</li>
<li>Expertise in at least one of the following:</li>
<li>Data Engineering technologies (Ex: Apache Spark™, Hadoop, Kafka)</li>
<li>Data Warehousing (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning technologies (Ex: pandas, scikit-learn, HPO)</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Native-level fluency in Korean and professional working proficiency in English (both written and verbal)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, public cloud infrastructure and services, data engineering, data science, SQL, Python, Scala, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It has over 10,000 organisations worldwide as customers.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8449860002</Applyto>
      <Location>Seoul, South Korea</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b92459c-038</externalid>
      <Title>Partner Solutions Architect</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect, you will work with Databricks&#39; Consulting and System Integrator (C&amp;SI) partners, teammates, and with the technical and sales team members who work directly with our customers.</p>
<p>You will develop &#39;technical champions&#39; within our top C&amp;SI Partners, providing enablement on technical matters related to the Databricks product. Working with our partners, you will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Data Intelligence Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>As a member of our team, you will exercise and develop expertise in those areas, using open-source projects such as Apache Spark, MLflow, and Delta Lake; and major public cloud infrastructure and services.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide partners with the level of enablement they need to assist their clients in evaluating and adopting Databricks, including hands-on Apache Spark programming and integration with the wider cloud ecosystem.</li>
<li>Engage with the partner technical community by leading workshops, seminars, and meet-ups.</li>
<li>Serve as a Big Data Analytics expert on aspects of architecture and design, and share this expertise with our partner network.</li>
<li>Show expertise by producing creative technical solutions and blog posts.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of pre-sales or post-sales experience working with external clients or partners across a variety of industry markets.</li>
<li>Understanding of a customer-facing pre-sales or consulting role, with a core strength in either data engineering or data science.</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding.</li>
<li>Experience developing architectures within a public cloud (AWS, Azure, or GCP).</li>
<li>Coding experience in SQL, Python, Scala, or Java.</li>
<li>Expertise in at least one of the following:</li>
<li>Data Engineering technologies (Ex: Apache Spark, Hadoop, Kafka)</li>
<li>Data Warehousing (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning technologies (Ex: pandas, scikit-learn, HPO)</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience.</li>
<li>Written and verbal fluency in Japanese and English.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, AWS, Azure, GCP, SQL, Python, Scala, Java, Data Engineering, Data Warehousing, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8347880002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c77545f4-627</externalid>
      <Title>Staff Machine Learning Scientist</Title>
      <Description><![CDATA[<p>We are seeking a Staff Machine Learning Scientist to help grow the Machine Learning Science team within the Computational Science department. The ideal candidate has a strong knowledge of artificial intelligence (AI), including machine learning (ML) fundamentals, extensive experience with deep learning (DL) methods, a track record of successfully using these methods to answer complex research questions, and the ability to drive independent research and thrive in a highly cross-functional environment.</p>
<p>They will be responsible for the development of algorithms for early, blood-based detection tests for cancer. They will build on a foundation of ML/DL and statistical skills to develop models for identifying molecular signals from blood. They will also work with computational biologists, molecular biologists and ML engineers to design and drive research experiments, and will have a significant impact on the continued growth of an organisation dedicated to changing the entire landscape of cancer.</p>
<p>The role reports to the Director, Machine Learning Science. This role can be a Hybrid role based in our Brisbane, California headquarters (2-3 days per week in office), or remote.</p>
<p>Responsibilities:</p>
<ul>
<li>Independently pursue cutting-edge research in AI applied to biological problems (including cancer research, genomics, computational biology, immunology, etc.)</li>
<li>Build new models or fine-tune existing models to identify biological changes resulting from disease</li>
<li>Build models that achieve high accuracy and that generalise robustly to new data</li>
<li>Apply contemporary interpretability techniques to provide a deeper understanding of the underlying signal identified by the model, ideally suggesting potential biological mechanisms</li>
<li>Work closely with ML Engineering partners to ensure that Freenome&#39;s computational infrastructure supports optimal model training and iteration</li>
<li>Take a mindful, transparent, and humane approach to your work</li>
</ul>
<p>Requirements:</p>
<ul>
<li>PhD or equivalent research experience with an AI emphasis and in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Engineering, Computational Biology, or Bioinformatics</li>
<li>6+ years of post-doc or post-PhD industry experience achieving impactful results using relevant modelling techniques</li>
<li>Expertise demonstrated by research publications or industry achievements, in driving independent research in applied machine learning, deep learning and complex data modelling</li>
<li>Practical and theoretical understanding of fundamental ML models like generalised linear models, kernel machines, decision trees and forests, neural networks, boosting and model aggregation</li>
<li>Practical and theoretical understanding of DL models like large language models or other foundation models</li>
<li>Extensive experience with training paradigms like supervised learning, self-supervised learning, and contrastive learning</li>
<li>Proficient in current state of the art in ML/DL approaches in different domains, with an ability to envision their applications in biological data</li>
<li>Proficiency in a general-purpose programming language: Python, R, Java, C, C++, etc.</li>
<li>Proficiency in one or more ML frameworks, such as PyTorch, TensorFlow, or JAX, and ML platforms like Hugging Face</li>
<li>Experience in ML analysis and developer tools like TensorBoard, MLflow or Weights &amp; Biases</li>
<li>Excellent ability to communicate across disciplines, work collaboratively, and make progress in smaller steps via experimental iterations</li>
<li>Proficient at productive cross-functional scientific communication and collaboration with software engineers and computational biologists</li>
<li>A passion for innovation and demonstrated initiative in tackling new areas of research</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Deep domain-specific experience in computational biology, genomics, proteomics or a related field</li>
<li>Experience in building DL models for genomic data, with knowledge of state-of-the-art DNA foundation models</li>
<li>Experience in NGS data analysis and bioinformatic pipelines</li>
<li>Experience with containerized cloud computing environments such as Docker in GCP, Azure, or AWS</li>
<li>Experience in a production software engineering environment, including the use of automated regression testing, version control, and deployment systems</li>
</ul>
<p>Benefits and additional information:</p>
<ul>
<li>The US target range of our base salary for new hires is $199,675.00 - $283,500.00. You will also be eligible to receive equity, cash bonuses, and a full range of medical, financial, and other benefits depending on the position offered. Please note that individual total compensation for this position will be determined at the Company&#39;s sole discretion and may vary based on several factors, including but not limited to, location, skill level, years and depth of relevant experience, and education.</li>
<li>Freenome is proud to be an equal-opportunity employer, and we value diversity. Freenome does not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.</li>
<li>Applicants have rights under Federal Employment Laws.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$199,675.00 - $283,500.00</Salaryrange>
      <Skills>Artificial Intelligence, Machine Learning, Deep Learning, Computational Biology, Genomics, Immunology, Python, R, Java, C, C++, PyTorch, TensorFlow, JAX, Hugging Face, TensorBoard, MLflow, Weights &amp; Biases</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Freenome</Employername>
      <Employerlogo>https://logos.yubhub.co/freenome.com.png</Employerlogo>
      <Employerdescription>Freenome is a biotechnology company developing a blood-based test for cancer detection.</Employerdescription>
      <Employerwebsite>https://freenome.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>199675</Compensationmin>
      <Compensationmax>283500</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/freenome/jobs/8215797002</Applyto>
      <Location>Brisbane, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>faec8dc3-4d3</externalid>
      <Title>Senior Machine Learning Scientist</Title>
      <Description><![CDATA[<p>We are seeking a Senior Machine Learning Scientist to help grow the Machine Learning Science team. The ideal candidate has a strong knowledge of artificial intelligence (AI), including machine learning (ML) fundamentals and extensive experience with deep learning (DL) methods. They will be responsible for the development of algorithms for early, blood-based detection tests for cancer. They will build on a foundation of ML/DL and statistical skills to develop models for identifying molecular signals from blood. They will also work with computational biologists, molecular biologists and ML engineers to design and drive research experiments, and will have a significant impact on the continued growth of an organisation dedicated to changing the entire landscape of cancer.</p>
<p>The role reports to the Director, Machine Learning Science. This role can be a Hybrid role based in our Brisbane, California headquarters (2-3 days per week in office), or remote.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Independently pursuing cutting-edge research in AI applied to biological problems</li>
<li>Building new models or fine-tuning existing models to identify biological changes resulting from disease</li>
<li>Building models that achieve high accuracy and that generalise robustly to new data</li>
<li>Applying contemporary interpretability techniques to provide a deeper understanding of the underlying signal identified by the model, ideally suggesting potential biological mechanisms</li>
<li>Working closely with ML Engineering partners to ensure that Freenome&#39;s computational infrastructure supports optimal model training and iteration</li>
<li>Taking a mindful, transparent, and humane approach to your work</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>PhD or equivalent research experience with an AI emphasis and in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Engineering, Computational Biology, or Bioinformatics</li>
<li>3+ years of postdoc or post-PhD industry experience achieving impactful results using relevant modelling techniques</li>
<li>Expertise, demonstrated by research publications or industry achievements, in applied machine learning, deep learning and complex data modelling</li>
<li>Practical and theoretical understanding of fundamental ML models like generalised linear models, kernel machines, decision trees and forests, neural networks</li>
<li>Practical and theoretical understanding of DL models like large language models or other foundation models</li>
<li>Extensive experience with training paradigms like supervised learning, self-supervised learning, and contrastive learning</li>
<li>Proficient in current state of the art in ML/DL approaches in different domains, with an ability to envision their applications in biological data</li>
<li>Proficiency in a general-purpose programming language: Python, R, Java, C, C++, etc.</li>
<li>Proficiency in one or more ML frameworks, such as PyTorch, TensorFlow, or JAX, and ML platforms like Hugging Face</li>
<li>Experience in ML analysis and developer tools like TensorBoard, MLflow or Weights &amp; Biases</li>
<li>Excellent ability to communicate across disciplines, work collaboratively, and make progress in smaller steps via experimental iterations</li>
<li>A passion for innovation and demonstrated initiative in tackling new areas of research</li>
</ul>
<p>Nice to have qualifications include:</p>
<ul>
<li>Deep domain-specific experience in computational biology, genomics, proteomics or a related field</li>
<li>Experience in building DL models for genomic data, with knowledge of state-of-the-art DNA foundation models</li>
<li>Experience in NGS data analysis and bioinformatic pipelines</li>
<li>Experience with containerized cloud computing environments such as Docker in GCP, Azure, or AWS</li>
<li>Experience in a production software engineering environment, including the use of automated regression testing, version control, and deployment systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$173,775 - $246,750</Salaryrange>
      <Skills>PhD or equivalent research experience, Applied machine learning, Deep learning, Complex data modelling, Generalised linear models, Kernel machines, Decision trees and forests, Neural networks, Large language models, Supervised learning, Self-supervised learning, Contrastive learning, Python, R, Java, C, C++, PyTorch, TensorFlow, JAX, Hugging Face, TensorBoard, MLflow, Weights &amp; Biases</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Freenome</Employername>
      <Employerlogo>https://logos.yubhub.co/freenome.com.png</Employerlogo>
      <Employerdescription>Freenome is a biotechnology company focused on developing liquid biopsy tests for cancer.</Employerdescription>
      <Employerwebsite>https://freenome.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>173775</Compensationmin>
      <Compensationmax>246750</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/freenome/jobs/7963050002</Applyto>
      <Location>Brisbane, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2bc207d0-89b</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Machine Learning Research Engineer to join the Machine Learning Science (MLS) team within the Computational Science department. The ideal candidate has a strong background in designing and building deep learning (DL) pipelines, and expertise in creating reliable, scalable artificial intelligence/machine learning (AI/ML) systems in a cloud environment.</p>
<p>The MLS team at Freenome develops DL models using massive-scale genomic data that presents significant challenges for current training paradigms. The Senior Machine Learning Research Engineer will primarily be responsible for developing and deploying the infrastructure needed to support development of such DL models: enabling distributed DL pipelines, optimising hardware utilisation for efficient training, and performing model optimisations.</p>
<p>As part of an interdisciplinary R&amp;D team, they will work in close collaboration with machine learning scientists, computational biologists and software engineers to accelerate the development of state-of-the-art ML/AI models and help Freenome achieve its mission.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Implementing and refining DL pipelines on distributed computing platforms to enhance the speed and efficiency of DL operations, including model training, data handling, model management, and inference.</li>
<li>Collaborating closely with ML scientists and software engineers to understand current challenges and requirements and ensure that the DL model development pipelines created are perfectly aligned with scientific goals and operational needs.</li>
<li>Continuously monitoring, evaluating, and optimising DL model training pipelines for performance and scalability.</li>
<li>Staying up to date with the latest advancements in AI, ML, and related technologies, and quickly learning and adapting new tools and frameworks, if necessary.</li>
<li>Developing and maintaining robust, reproducible DL pipelines that can be reliably executed, maintaining consistency and accuracy of results.</li>
<li>Driving performance improvements across our stack through profiling, optimisation, and benchmarking. Implementing efficient caching solutions and debugging distributed systems to accelerate both training and evaluation pipelines.</li>
<li>Acting as a bridge facilitating communication between the engineering and scientific teams, documenting and sharing best practices to foster a culture of learning and continuous improvement.</li>
</ul>
<p>Must-haves include:</p>
<ul>
<li>MS or equivalent experience in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Software Engineering, with an emphasis on AI/ML theory and/or practical development.</li>
<li>5+ years of post-MS industry experience working on developing AI/ML software engineering pipelines.</li>
<li>Proficiency in a general-purpose programming language: Python (preferred), Java, Julia, C, C++, etc.</li>
<li>Strong knowledge of ML and DL fundamentals and hands-on experience with machine learning frameworks such as PyTorch, TensorFlow, JAX, or scikit-learn.</li>
<li>In-depth knowledge of scalable and distributed computing platforms that support complex model training (such as Ray or DeepSpeed) and their integration with ML developer tools like TensorBoard, Wandb, or MLflow.</li>
<li>Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and how to deploy and manage AI/ML models and pipelines in a cloud environment.</li>
<li>Understanding of containerisation technologies (e.g., Docker) and computing resource orchestration tools (e.g., Kubernetes) for deploying scalable ML/AI solutions.</li>
<li>Proven track record of developing and optimising workflows for training DL models, large language models (LLMs), or similar for problems with high data complexity and volume.</li>
<li>Experience managing large datasets, including data storage (such as HDFS or Parquet on S3), retrieval, and efficient data processing techniques (via libraries and executors such as PyArrow and Spark).</li>
<li>Proficiency in version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) practices to maintain code quality and automate development workflows.</li>
<li>Expertise in building and launching large-scale ML frameworks in a scientific environment that supports the needs of a research team.</li>
<li>Excellent ability to work effectively with cross-functional teams and communicate across disciplines.</li>
</ul>
<p>Nice-to-haves include:</p>
<ul>
<li>Experience working with large-scale genomics or biological datasets.</li>
<li>Experience managing multimodal datasets, such as combinations of sequence, text, image, and other data.</li>
<li>Experience with GPU/accelerator programming and kernel development (e.g., CUDA, Triton, or XLA).</li>
<li>Experience with infrastructure-as-code and configuration management.</li>
<li>Experience cultivating MLOps and ML infrastructure best practices, especially around reliability, provisioning and monitoring.</li>
<li>Strong track record of contributions to relevant DL projects (e.g., on GitHub).</li>
</ul>
<p>The target US base salary range for new hires is $161,925 - $227,325. You will also be eligible to receive equity, cash bonuses, and a full range of medical, financial, and other benefits depending on the position offered.</p>
<p>Freenome is proud to be an equal-opportunity employer, and we value diversity. Freenome does not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$161,925 - $227,325</Salaryrange>
      <Skills>Python, Java, Julia, C, C++, PyTorch, TensorFlow, JAX, scikit-learn, Ray, DeepSpeed, TensorBoard, Wandb, MLflow, AWS, Google Cloud, Azure, Docker, Kubernetes, Git, Continuous Integration/Continuous Deployment, Large-scale genomics or biological datasets, Multimodal datasets, GPU/Accelerator programming and kernel development, Infrastructure-as-code and configuration management, MLOps and ML infrastructure best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Freenome</Employername>
      <Employerlogo>https://logos.yubhub.co/freenome.com.png</Employerlogo>
      <Employerdescription>Freenome is a quantitative biology company that aims to reduce cancer mortality via accessible early detection.</Employerdescription>
      <Employerwebsite>https://freenome.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>161925</Compensationmin>
      <Compensationmax>227325</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/freenome/jobs/8013673002</Applyto>
      <Location>Brisbane, California</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
  </jobs>
</source>