<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1bd2d1b2-84f</externalid>
      <Title>Senior Machine Learning Researcher</Title>
      <Description><![CDATA[<p>We are seeking a senior machine learning researcher to join our Core AI team.</p>
<p>As part of the team, you will help solve complex business problems by developing viable cutting-edge AI/ML solutions.</p>
<p>You will develop and implement creative solutions that fundamentally transform business processes, delivering breakthrough improvements rather than incremental changes.</p>
<p>You will work closely with other AI/ML researchers and engineers, SWEs, product owners/managers, and business stakeholders, and participate in the full lifecycle of solution development, including requirements gathering with business, experimentation and algorithmic exploration, development, and assistance with productization.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work independently or as part of a team to help design and implement high-accuracy solutions with delightful user experiences, utilizing ML, NLP, GenAI, and agentic technologies.</li>
<li>Participate in all aspects of solution development, including ideation and requirements gathering with business stakeholders, experimentation and exploration to identify strong solution approaches, and solution development.</li>
<li>Prototype, test, and iterate on novel AI models and approaches to solve complex business challenges.</li>
<li>Collaborate with cross-functional teams to identify opportunities where AI can create significant business value, and transition solutions into production systems.</li>
<li>Research and stay updated on the latest advancements in machine learning and AI technologies.</li>
<li>Participate in code reviews, technical discussions, and knowledge-sharing sessions.</li>
<li>Communicate technical concepts and transformative ideas effectively to both technical and non-technical stakeholders.</li>
</ul>
<p>Required Skills &amp; Qualifications:</p>
<ul>
<li>Bachelor&#39;s with 10+ years, Master&#39;s with 7+ years, or PhD with 5+ years in Computer Science, Data Science, Machine Learning, or a related field.</li>
<li>Deep expertise and proven ability in developing high-accuracy, high-value solutions to business problems in the NLP, Generative AI, Agentic AI, and/or ML space.</li>
<li>Hands-on experience with data processing, experimentation, and exploration.</li>
<li>Strong programming skills in Python.</li>
<li>Experience with cloud platforms (AWS, Azure, GCP) for deploying ML solutions.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication skills to collaborate with technical and non-technical stakeholders.</li>
<li>Ability to work independently and collaboratively.</li>
</ul>
<p>Additional Preferred Skills &amp; Qualifications:</p>
<ul>
<li>Understanding of the financial markets, including experience with financial datasets, is strongly preferred.</li>
<li>Experience with ML frameworks such as PyTorch, TensorFlow.</li>
<li>Familiarity with MLOps practices and tools such as SageMaker, MLflow, or Airflow.</li>
<li>Previous experience working in an Agile environment.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Machine Learning, NLP, GenAI, Agentic technologies, Data processing, Experimentation, Exploration, Cloud platforms (AWS, Azure, GCP), Problem-solving skills, Communication skills, PyTorch, TensorFlow, MLOps practices and tools (SageMaker, MLflow, Airflow), Agile environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT - Artificial Intelligence</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company focuses on artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954012324</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af8ed06d-a9a</externalid>
      <Title>Forward Deployed Software Engineer - Equities Technology</Title>
      <Description><![CDATA[<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>
<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>
<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>
<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>
<p>To succeed in this role, you will need to have a deep understanding of computer science principles, including data structures, algorithms, and system design. You will also need to have experience working with cloud providers, such as AWS or GCP, and be familiar with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential, as you will need to build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>
<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides technology solutions to the financial services industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953439247</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5dfa9c86-5c0</externalid>
      <Title>Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies</Title>
      <Description><![CDATA[<p>Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies</p>
<p>Global Insights, Analytics &amp; Forecasting, BBU</p>
<p>Hybrid Work – on average 3 days a week from the office</p>
<p>The Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies is a senior commercial insights leader responsible for US demand forecasting and analytics across the V&amp;I portfolio. The role is predominantly forecast-focused, serving as the US forecasting lead and strategic thought partner to Marketing, Finance, Market Access, and US teams.</p>
<p>Responsibilities:</p>
<p>US Forecasting Leadership (Core Accountability)</p>
<ul>
<li>Lead US short-term and long-term demand forecasts (TRx, NBRx, volume, patients, revenue) for V&amp;I assets using robust, patient-based and market-based models</li>
<li>Own forecast methodology, assumptions, and governance, ensuring objectivity, transparency, and consistency with enterprise standards</li>
<li>Integrate primary market research, epidemiology, competitive intelligence, access dynamics, and real-world data into forecast models</li>
<li>Proactively identify and quantify key risks and opportunities through scenario and sensitivity analyses</li>
<li>Partner closely with Finance, Market Access &amp; Pricing, Marketing, Sales, Medical, and Global Forecasting to ensure alignment on assumptions and implications</li>
<li>Support business planning, governance reviews, and opportunity assessments with clear, executive-ready narratives</li>
<li>Serve as a trusted advisor to senior marketing and finance leadership, clearly articulating forecast drivers and changes</li>
</ul>
<p>Analytics &amp; Resource Leadership (Enablement)</p>
<ul>
<li>Provide leadership over forecasting-adjacent analytics, ensuring advanced analytics and insights are embedded into forecasting and business planning</li>
<li>Manage and prioritize internal analysts, contractors, and external vendors supporting forecasting and analytics deliverables</li>
<li>Partner with data analytics resources, Global IA&amp;F, and GIBEX capability teams to deploy new tools, data sources, and modeling approaches</li>
<li>Champion and identify new ways to embed AI and advanced automation into the practice of data analytics and forecasting to drive efficiency, scalability, and decision quality</li>
<li>Champion continuous improvement in forecasting processes, AI-enabled modeling, and automation</li>
<li>Contribute to the development and sharing of best practices across the V&amp;I forecasting community</li>
</ul>
<p>Essential for the role</p>
<ul>
<li>Bachelor’s degree in a quantitative, scientific, or business-related field required (e.g., Statistics, Economics, Mathematics, Engineering, Computer/Data Science).</li>
<li>8+ years’ experience in US pharmaceutical commercial forecasting, including in-market and late-stage pipeline assets</li>
<li>Hands-on model ownership experience (build, refresh, and performance tracking) across short- and long-term horizons</li>
<li>Expertise in scenario-based forecasting, sensitivity analysis, and driver-based narratives to support senior decision-making</li>
<li>Strong capability integrating multiple data types (e.g., IQVIA, claims, epidemiology, RWD/RWE, primary research) into coherent, decision-grade forecasts</li>
<li>Working knowledge of advanced analytics/ML approaches (e.g., time series, causal inference, ensembles) and where they add value vs. traditional methods</li>
<li>Fluency in modern analytics tooling and automation (e.g., Python/R/SQL, BI/visualization), with ability to partner effectively with data engineering and analytics teams</li>
<li>Demonstrated forecast governance and model risk discipline (traceable assumptions, documentation, and clear explanations)</li>
<li>Strong understanding of US market access and payer dynamics and how they impact demand (coverage, contracting, channel, policy)</li>
<li>Exceptional communication: translates complex analysis into clear, executive-ready insights, options, and recommendations</li>
<li>Strong commercial competence across key demand levers (positioning, adoption, competitive dynamics, lifecycle events)</li>
</ul>
<p>Desirable for the role</p>
<ul>
<li>Advanced degree preferred (e.g., MBA, MS, PhD in Statistics, Economics, Decision Sciences, Data Science, or related discipline).</li>
<li>Vaccines and/or Rare Disease experience, including familiarity with immunization dynamics, patient-based forecasting, and lifecycle management in preventive or immune-mediated therapies</li>
<li>Change leadership: builds adoption for new tools, processes, and ways of working across cross-functional stakeholders</li>
<li>Product mindset for forecasting: defines user needs, success metrics, and a roadmap for portfolio forecasting capabilities</li>
<li>Model lifecycle practices (e.g., reproducibility, versioning, monitoring/drift awareness); familiarity with MLOps concepts</li>
</ul>
<p>Office Working Requirements</p>
<p>When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That’s why we work, on average, a minimum of three days per week from the office. But that doesn’t mean we’re not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.</p>
<p>#LI-Hybrid</p>
<p>Date Posted: 10-Apr-2026. Closing Date: 23-Apr-2026.</p>
<p>Our mission is to build an inclusive environment where equal employment opportunities are available to all applicants and employees. In furtherance of that mission, we welcome and consider applications from all qualified candidates, regardless of their protected characteristics. If you have a disability or special need that requires accommodation, please complete the corresponding section in the application form.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>forecasting, analytics, model ownership, scenario-based forecasting, sensitivity analysis, driver-based narratives, advanced analytics, machine learning, Python, R, SQL, BI/visualization, data engineering, forecast governance, model risk discipline, US market access, payer dynamics, exceptional communication, commercial competence, vaccines, rare disease, change leadership, product mindset, model lifecycle practices, MLOps</Skills>
      <Category>Finance</Category>
      <Industry>Healthcare</Industry>
      <Employername>Global Insights, Analytics &amp; Forecasting - V&amp;I</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca&apos;s Global Insights, Analytics &amp; Forecasting - V&amp;I division focuses on providing insights and analytics to support vaccine and immune therapy development.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689756206</Applyto>
      <Location>Wilmington, Delaware, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>04c1ff49-2d1</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 10% of the time</li>
</ul>
<p>[Preferred] Databricks Certification, though not essential</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8396801002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19fc414d-dcc</externalid>
      <Title>Specialist Solutions Architect - AI &amp; ML (Communications, Media, Entertainment &amp; Games)</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect - AI &amp; ML Engineer, you will be the trusted technical ML &amp; AI expert to both Databricks customers and the Field Engineering organisation.</p>
<p>You will work with Solution Architects to guide customers in architecting production-grade ML &amp; AI applications on Databricks, while aligning their technical roadmap with the continually evolving Databricks Data Intelligence Platform.</p>
<p>You will continue to strengthen your technical skills through applying cutting-edge technologies in GenAI, MLOps, and ML more broadly, expanding your impact through mentorship, and establishing yourself as an AI thought leader.</p>
<p>The impact you will have:</p>
<ul>
<li>Architect production-level ML &amp; AI workloads for customers using our unified platform, including agents, end-to-end ML pipelines, training/inference optimisation, integration with cloud-native services, MLOps, etc.</li>
<li>Serve as trusted practitioner for enterprise GenAI solutions, including RAG architectures, agentic systems (tool-calling agents, multi-agent orchestration, guardrails), natural language querying of structured data, AI evaluation and observability, and monitoring systems</li>
<li>Build, scale, and optimise customer AI workloads and apply best-in-class MLOps to productionise these workloads across a variety of domains</li>
<li>Provide advanced technical support to Solution Architects during the technical sale, ranging from feature engineering, training, tracking, and serving to model monitoring, all within a single platform, as well as participating in the larger ML SME community in Databricks</li>
<li>Collaborate cross-functionally with the product and engineering teams to represent the voice of the customer, define priorities, and influence the product roadmap, helping with the adoption of Databricks&#39; AI offerings</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of hands-on industry ML experience in at least one of the following:
<ul>
<li>ML Engineer: Build and maintain production-grade cloud (AWS/Azure/GCP) infrastructure that supports the deployment of ML applications, including drift monitoring.</li>
<li>AI Engineer: Experience with the latest techniques in LLMs &amp; agentic systems, including vector databases, fine-tuning LLMs, AI guardrail systems, and deploying LLMs with tools such as HuggingFace, LangChain, and OpenAI</li>
</ul>
</li>
<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience</li>
<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>
<li>Passion for collaboration, life-long learning, and driving business value through ML &amp; AI</li>
<li>[Preferred] 2+ years customer-facing experience in a pre-sales or post-sales role</li>
<li>Can meet expectations for technical training and role-specific outcomes within 3 months of hire</li>
<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilising the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>
<p>Local Pay Range $219,100-$301,300 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$219,100-$301,300 USD</Salaryrange>
      <Skills>ML Engineer, AI Engineer, GenAI, MLOps, Cloud-Native Services, Vector Databases, Fine-Tuning LLMs, AI Guardrail Systems, HuggingFace, Langchain, OpenAI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>219100</Compensationmin>
      <Compensationmax>301300</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8480547002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b244f27-9fd</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>
<p>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications. Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</p>
<p>Provide an escalated level of support for customer operational issues. You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</p>
<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfort writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
</ul>
<p>Travel to customers 20% of the time.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461258002</Applyto>
      <Location>Raleigh, North Carolina</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fdc6f0f9-900</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, distributed computing, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461168002</Applyto>
      <Location>Los Angeles, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>
<p>You will contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability, and troubleshoot and resolve complex deployment issues to ensure minimal disruption to development cycles.</p>
<p>You will develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment, and document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>
<p>You will proactively incorporate security best practices into all stages of the CI/CD pipeline, and drive standardization and collaboration across product teams to achieve a unified and efficient SDLC.</p>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization and deployment technologies such as Kubernetes, Terraform, and Docker is required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f962d3f-14e</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461218002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bc4518a-7e3</externalid>
      <Title>AI Applications Ops Lead, GPS</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for national LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Production AI Ops Lead, you will design and manage the production lifecycle of full-stack AI applications, supporting end-to-end system reliability, real-time inference observability, sovereign data orchestration, high-security software integration, and the resilient cloud infrastructure required by our international government partners.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the production outcome: Take full accountability for the long-term performance and reliability of AI use cases deployed across international government agencies.</li>
<li>Ensure full-stack integrity: Oversee the end-to-end health of the platform, ensuring seamless integration between the AI core and all full-stack components, from APIs to UI, to maintain a responsive and production-ready environment.</li>
<li>Scale the feedback loop: Build automated systems to monitor model performance and data drift across geographically dispersed environments, ensuring the right levels of reliability.</li>
<li>Navigate global compliance: Manage the technical lifecycle within diverse regulatory frameworks.</li>
<li>Incident command: Lead the response to production issues in mission-critical environments, ensuring rapid resolution and building guardrails to prevent recurrence.</li>
<li>Bridge the gap: Translate deep technical performance metrics into clear insights for senior international government officials.</li>
<li>Drive product evolution: Partner with our Engineering and ML teams to ensure lessons learned in the field directly influence the technical architecture and decisions of future use cases.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Experience: 6+ years in a high-impact technical role (SRE, FDE, or MLOps) with experience in the public sector.</li>
<li>Global perspective: Familiarity with international government security standards and the complexities of deploying sovereign AI.</li>
<li>System architecture proficiency: Proven experience maintaining production-grade applications, with a deep understanding of the full request lifecycle connecting frontend/API layers to the backend and AI core.</li>
<li>Modern AI stack expertise: Proficiency in coding and in modern AI infrastructure, including Kubernetes, vector databases, agentic development, and LLM observability tools.</li>
<li>Ownership: You treat every production deployment as your own. You race toward solving hard problems before the customer even sees them.</li>
<li>Reliability: You understand that in the public sector, a model failure may be a risk to public safety or privacy.</li>
<li>Customer communication: The ability to explain to a high-ranking official why the performance of the system has degraded and how we are fixing it.</li>
</ul>
<p><strong>About Us</strong></p>
<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Vector databases, Agentic development, LLM observability tools, SRE, FDE, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4654510005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0036f074-845</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
<Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr. Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Ability to travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456966002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a7cad02-cd5</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494155002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ccb5daf2-354</externalid>
      <Title>Sr. ML Ops Engineer, tvScientific</Title>
<Description><![CDATA[<p>We&#39;re looking for a Senior MLOps Engineer to join our distributed engineering team on our Connected TV ad-buying platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Scaling tool selection and decision-making for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments</li>
<li>Improving the developer experience for the data science team</li>
<li>Upgrading our observability tooling</li>
<li>Serving as a technical lead and mentor to the team</li>
<li>Making every deployment smooth as our infrastructure evolves</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Deep understanding of Linux</li>
<li>Excellent writing skills</li>
<li>A systems-oriented mindset</li>
<li>Experience in high-performance software (RTB, HFT, etc.)</li>
<li>Software engineering experience + reliability (e.g. CI/CD) expertise</li>
<li>Strong observability instincts</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>
<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>
</ul>
<p>Nice-to-haves include:</p>
<ul>
<li>Reverse-engineering experience</li>
<li>Terraform, EKS, or MLOps experience</li>
<li>Python, Scala, or Zig experience</li>
<li>NixOS experience</li>
<li>Adtech or CTV experience</li>
<li>Experience deploying a distributed system across multiple clouds</li>
<li>Experience with hard real-time, low-latency systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$155,584-$320,320 USD</Salaryrange>
      <Skills>Linux, writing skills, systems-oriented mindset, high-performance software, software engineering, reliability, observability, AI, critical evaluation, verification, data protection, data validation, peer review, reverse-engineering, Terraform, EKS, MLOps, Python, Scala, Zig, NixOS, adtech, CTV, distributed system, hard real-time low-latency</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tvScientific</Employername>
      <Employerlogo>https://logos.yubhub.co/tvscientific.com.png</Employerlogo>
      <Employerdescription>tvScientific is a CTV advertising platform purpose-built for performance marketers, leveraging massive data and cutting-edge science to automate and optimize TV advertising.</Employerdescription>
      <Employerwebsite>https://www.tvscientific.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7642249</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc79e6e5-5c0</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and benefits.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, Data engineering, Data science, Cloud technology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494156002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2d718a-f65</externalid>
      <Title>Senior Software Engineer, AI Platform and Enablement</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>
<p><strong>About the Team</strong></p>
<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>
<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployment/GPU infrastructure</li>
<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>
<li>Optimize and scale our models and algorithms for efficient inference</li>
<li>Deploy, monitor, and manage AI models in production</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Experience deploying and managing AI models in production</li>
<li>Experience with large-volume data pipeline tools such as Spark, Flume, or Dask</li>
<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)</li>
<li>Knowledge of DevOps and MLOps best practices</li>
<li>Strong problem-solving abilities and excellent communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous healthcare package</li>
<li>401k matching program</li>
<li>Catered lunches</li>
<li>Flexible vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $286,000/year</Salaryrange>
      <Skills>Experience in deploying and managing AI models in production, Experience with the tools of large volume data pipelines like spark, flume, dask, etc., Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has 150 employees and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7580335003</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>48e2e160-bde</externalid>
      <Title>Senior Solutions Architect - Weights &amp; Biases</Title>
      <Description><![CDATA[<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>
<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love diving into infrastructure problems and solving them systematically</li>
<li>You’re curious about how to scale complex ML systems in production environments</li>
<li>You’re an expert in building and running containerized, distributed systems</li>
</ul>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance</li>
<li>100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 to $200,000</Salaryrange>
      <Skills>Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4622845006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10ceb713-2cf</externalid>
      <Title>Specialist Solutions Architect - AI &amp; ML (Financial Services)</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect - AI &amp; ML Engineer, you will be the trusted technical ML &amp; AI expert to both Databricks customers and the Field Engineering organization.</p>
<p>You will work with Solution Architects to guide customers in architecting production-grade ML &amp; AI applications on Databricks, while aligning their technical roadmap with the continually evolving Databricks Data Intelligence Platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Architecting production-level ML &amp; AI workloads for customers using our unified platform, including agents, end-to-end ML pipelines, training/inference optimization, integration with cloud-native services, MLOps, etc.</li>
<li>Serving as a trusted practitioner for enterprise GenAI solutions, including RAG architectures, agentic systems (tool-calling agents, multi-agent orchestration, guardrails), natural language querying of structured data, AI evaluation and observability, and monitoring systems</li>
<li>Building, scaling, and optimizing customer AI workloads and applying best-in-class MLOps to productionize these workloads across a variety of domains</li>
<li>Providing advanced technical support to Solution Architects during the technical sale, ranging from feature engineering, training, tracking, and serving to model monitoring, all within a single platform, as well as participating in the larger ML SME community at Databricks</li>
<li>Collaborating cross-functionally with the product and engineering teams to represent the voice of the customer, define priorities, and influence the product roadmap, helping drive adoption of Databricks&#39; AI offerings</li>
</ul>
<p>We are looking for someone with 5+ years of hands-on industry ML experience in at least one of the following areas:</p>
<ul>
<li>ML Engineer: Build and maintain production-grade cloud (AWS/Azure/GCP) infrastructure that supports the deployment of ML applications, including drift monitoring</li>
<li>AI Engineer: Experience with the latest techniques in LLMs &amp; agentic systems, including vector databases, fine-tuning LLMs, AI guardrail systems, and deploying LLMs with tools such as HuggingFace, Langchain, and OpenAI</li>
</ul>
<p>A graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience is also required.</p>
<p>Additionally, experience communicating and/or teaching technical concepts to non-technical and technical audiences alike is highly valued.</p>
<p>The salary range for this position is $180,000-$247,500 USD, depending on location.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>ML Engineer, AI Engineer, GenAI, MLOps, Cloud Native Services, Vector Databases, Fine-Tuning LLMs, AI Guardrail Systems, HuggingFace, Langchain, OpenAI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8434243002</Applyto>
      <Location>Central - United States; Northeast - United States; Southeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42af3f66-4fc</externalid>
      <Title>AI Infrastructure Architect</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>AI Infrastructure Architect</strong></p>
<p>About the Role</p>
<p>We are looking for a smart and versatile AI Infrastructure Architect to build and evolve the AI infrastructure and platform that powers our identity security solutions. Your work will enable internal teams and product groups to integrate AI capabilities safely, securely, and at scale, empowering Okta’s mission to protect millions of digital identities worldwide. While your primary focus will be to architect scalable, secure, and resilient infrastructure supporting AI-driven tools, frameworks, and identity services, we value someone who isn’t afraid to get hands-on when needed to help solve complex challenges and drive projects forward.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Lead AI enablement initiatives, including proof-of-concepts for emerging AI infrastructure technologies and integration approaches</li>
<li>Collaborate cross-functionally with engineering, security, data science, and product teams to align AI platform architecture with business and security goals</li>
<li>Architect scalable, resilient, and secure AI infrastructure that supports AI-powered tools and features across Okta’s Identity Platform</li>
<li>Lead infrastructure decisions across AWS, GCP, or hybrid environments with a focus on secure identity data handling</li>
<li>Develop and maintain infrastructure-as-code frameworks (e.g., Terraform, Helm) to ensure consistent, reproducible deployment of AI services</li>
<li>Champion security and compliance by embedding data privacy and identity protection standards directly into the AI platform and infrastructure design</li>
<li>Serve as the key advocate and strategist for AI-driven efficiency initiatives across infrastructure platform teams and pre-production systems</li>
<li>Implement robust MLOps practices, such as model evaluation, rollback strategies, and A/B testing, to guarantee the reliability and governance of AI in production</li>
<li>Drive continuous innovation by staying current with AI and cloud infrastructure trends and evangelizing best practices internally</li>
</ul>
<p><strong>Desired Qualifications</strong></p>
<ul>
<li>10+ years in infrastructure or software engineering, with ≥ 2 years building AI/ML systems</li>
<li>Exceptional systems-level thinking and a track record of architecting and building enterprise-grade infrastructure</li>
<li>Deep expertise in cloud platforms (AWS, GCP), distributed systems, and container orchestration (Kubernetes)</li>
<li>Very hands-on, expected to create, review, and contribute substantial amounts of high-quality code</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience in identity, security, fraud, or risk analytics domains</li>
<li>Experience operationalizing large language models or foundation models in production environments</li>
<li>Contributions to MLOps or infrastructure open-source projects</li>
</ul>
<p><strong>What You’ll Gain</strong></p>
<ul>
<li>Opportunity to lead infrastructure shaping AI systems that protect millions of identity transactions</li>
<li>Be at the core of building efficient, AI-powered, enterprise-grade solutions that touch internal and external customers alike</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$235,000-$353,000 USD</Salaryrange>
      <Skills>cloud platforms, distributed systems, container orchestration, infrastructure-as-code, MLOps, AI infrastructure, security and compliance, data privacy and identity protection, identity and security, fraud and risk analytics, large language models and foundation models, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a provider of identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7122284</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d70a8194-b84</externalid>
      <Title>Software Engineer, Machine Learning</Title>
      <Description><![CDATA[<p>We are seeking a versatile and experienced Machine Learning / AI Engineer to join our growing AI team, working at the intersection of applied machine learning, infrastructure, and product innovation. Your work will drive user productivity, shape new product experiences, and advance the state of AI at Figma.</p>
<p>As a Machine Learning / AI Engineer, you will design, build, and productionize ML models for Search, Discovery, Ranking, Retrieval-Augmented Generation (RAG), and generative AI features. You will also build and maintain scalable data pipelines to collect high-quality training and evaluation datasets, including annotation systems and human-in-the-loop workflows.</p>
<p>You will collaborate closely with engineers, researchers, designers, and product managers across multiple teams to deliver high-quality ML-driven features and infrastructure. This is a high-impact, cross-functional role where you will shape both foundational systems and user-facing capabilities.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, build, and productionize ML models for Search, Discovery, Ranking, Retrieval-Augmented Generation (RAG), and generative AI features.</li>
<li>Build and maintain scalable data pipelines to collect high-quality training and evaluation datasets, including annotation systems and human-in-the-loop workflows.</li>
<li>Collaborate with AI researchers to iterate on datasets, evaluation metrics, and model architectures to improve quality and relevance.</li>
<li>Work with product engineers to define and deliver impactful AI features across Figma&#39;s platform.</li>
<li>Partner with infrastructure engineers to develop and optimize systems for training, inference, monitoring, and deployment.</li>
<li>Explore new ideas at the edge of what&#39;s technically possible and help shape the long-term AI vision at Figma.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of industry experience in software engineering, with 3+ years focused on applied machine learning or AI.</li>
<li>Strong experience with end-to-end ML model development, including training, evaluation, deployment, and monitoring.</li>
<li>Proficiency in Python and familiarity with ML libraries like PyTorch, TensorFlow, Scikit-learn, Spark MLlib, or XGBoost.</li>
<li>Experience designing and building scalable data and annotation pipelines, as well as evaluation systems for AI model quality.</li>
<li>Experience mentoring or leading others and contributing to a culture of technical excellence and innovation.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Familiarity with search relevance, ranking, NLP, or RAG systems.</li>
<li>Experience with AI infrastructure and MLOps, including observability, CI/CD, and automation for ML workflows.</li>
<li>Experience working on creative or design-focused ML applications.</li>
<li>Knowledge of additional languages such as C++ or Go is a plus, but not required.</li>
<li>A product mindset with the ability to tie technical work to user outcomes and business impact.</li>
<li>Strong collaboration and communication skills, especially when working across functions (engineering, product, research).</li>
</ul>
<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyways. You may be just the right candidate for this or other roles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$153,000-$376,000 USD</Salaryrange>
      <Skills>Machine Learning, AI, Python, PyTorch, TensorFlow, Scikit-learn, Spark MLlib, XGBoost, Data Pipelines, Annotation Systems, Human-in-the-loop Workflows, Search Relevance, Ranking, NLP, RAG Systems, AI Infrastructure, MLOps, Observability, CI/CD, Automation, Creative or Design-Focused ML Applications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design and collaboration platform that helps teams bring ideas to life. It was founded in 2012 and has since grown to become a leading player in the design and collaboration space.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5551532004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4ea7999b-3d8</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd-party migrations and end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Willingness to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494145002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ca21d379-481</externalid>
<Title>AI Solutions Engineer, Post-Sales - W&amp;B</Title>
      <Description><![CDATA[<p>The Field Engineering team at Weights &amp; Biases plays a vital role in ensuring customer success and adoption of our platform. As part of this team, we partner with Sales, Support, Product, and Engineering to lead technical success after the sales process.</p>
<p>We work closely with some of the most advanced AI teams in the world, helping them build, optimize, and scale their ML and GenAI workflows across industries such as computer vision, robotics, natural language processing, and large language models (LLMs).</p>
<p>We’re hiring an AI Solutions Engineer, Post-Sales to help customers solve real-world problems by enabling them to implement and scale ML pipelines and agentic workflows using Weights &amp; Biases. In this role, you’ll collaborate with engineering teams to ensure smooth onboarding and adoption, act as a trusted advisor on best practices, and represent the voice of the customer internally.</p>
<p>You will partner directly with leading AI teams to optimize workflows, share technical expertise, and influence our product roadmap based on real-world customer feedback.</p>
<p>This is an ideal opportunity for ML practitioners who are customer-focused and eager to work with top AI companies globally.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with engineering teams to ensure smooth onboarding and adoption of Weights &amp; Biases</li>
<li>Act as a trusted advisor on best practices for implementing and scaling ML pipelines and agentic workflows</li>
<li>Represent the voice of the customer internally and influence our product roadmap based on real-world customer feedback</li>
<li>Partner directly with leading AI teams to optimize workflows and share technical expertise</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3–5 years of relevant experience in a similar role</li>
<li>Strong programming proficiency in Python</li>
<li>Hands-on experience enabling production-grade ML systems, with a focus on training and inference pipelines, experiment tracking, deployment patterns, and observability using deep learning frameworks (TensorFlow/Keras, PyTorch/PyTorch Lightning) and MLOps tooling (e.g. Airflow, Kubeflow, Ray, TensorRT)</li>
<li>Familiarity with cloud platforms (AWS, GCP, Azure)</li>
<li>Experience with GenAI/LLMs and related tools (e.g. LangChain/LangGraph, HuggingFace Transformers, Pinecone, Weaviate)</li>
<li>Strong experience with Linux/Unix</li>
<li>Excellent communication and presentation skills, both written and verbal</li>
<li>Ability to break down and solve complex problems through customer consultation and execution</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Background in robotics</li>
<li>TypeScript experience</li>
<li>Proficiency with Fastai, scikit-learn, XGBoost, or LightGBM</li>
<li>Background in data engineering, MLOps, or LLMOps, with tools such as Docker and Kubernetes</li>
<li>Familiarity with data pipeline tools</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Python, ML systems, deep learning frameworks, MLOps tooling, cloud platforms, GenAI/LLMs, Linux/Unix, communication and presentation skills, robotics, TypeScript, Fastai, scikit-learn, XGBoost, LightGBM, data engineering, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. It became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4651106006</Applyto>
      <Location>Livingston, NJ / New York, NY / Philadelphia, PA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, ClickHouse, and PostgreSQL, with software built using Go, JavaScript/TypeScript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficiency with backend languages like Go, Python, or TypeScript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never store client IP addresses. Ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>85f1f87e-70f</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects, leading to the customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461327002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ffd169d9-40b</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to the customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data platforms &amp; analytics, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified data intelligence platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461239002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bac99a46-7f5</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to the customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461243002</Applyto>
      <Location>Denver, Colorado</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26f523c0-bbd</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification while delivering excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to the customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and benefits. For more information regarding which range your location is in visit our page here.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494154002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>30a09520-889</externalid>
      <Title>Account Manager (W&amp;B)</Title>
      <Description><![CDATA[<p>The Account Manager owns the commercial and relationship aspects of the post-sales journey across a portfolio of Digital Native and select Enterprise customers. You will drive renewals, identify and close upsell and cross-sell opportunities, and ensure customers achieve measurable adoption outcomes with Weights &amp; Biases (W&amp;B).</p>
<p>You will partner closely with Field Engineering (FE), who leads technical success, while you lead the commercial motions including renewal execution, usage-to-value alignment, growth pipeline creation, and multi-threaded stakeholder engagement.</p>
<p>This role requires comfort engaging highly technical personas (ML engineers, researchers, PhDs) and operating with autonomy in a rapidly evolving AI ecosystem; it is not a playbook-driven role.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning renewals, upsells, and cross-sells across your assigned accounts.</li>
<li>Building and maintaining detailed account plans including account maps, whitespace, usage trends, risks, and growth opportunities.</li>
<li>Generating growth pipeline by identifying new use cases, teams, and product opportunities within existing accounts.</li>
</ul>
<p>To be successful in this role, you will need to have a strong, genuine interest in AI/ML and the evolving machine learning ecosystem. You should also have high technical and product curiosity, be comfortable speaking with developers, ML engineers, and researchers, and have proven ability to drive growth motions (upsells, cross-sells) and manage retention in technical accounts.</p>
<p>Preferred qualifications include experience working with ML, MLOps, DevOps, or data infrastructure teams, familiarity with Git, Jupyter, Python, PyTorch, or cloud platforms (AWS, GCP, Azure), and exposure to AI-native companies, model builders, or generative AI workflows.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$95,000 to $130,000</Salaryrange>
      <Skills>Account Management, Renewals, Upselling, Cross-selling, Technical Account Management, ML, MLOps, DevOps, Data Infrastructure, Git, Jupyter, Python, PyTorch, Cloud Platforms</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a publicly traded company that provides a platform of technology, tools, and teams for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649877006</Applyto>
      <Location>San Francisco, CA / Sunnyvale, CA / New York, NY / Livingston, NJ</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>abff148c-cfd</externalid>
      <Title>Staff Machine Learning Engineer, GenAI Platform</Title>
      <Description><![CDATA[<p>As a Staff Machine Learning Engineer on the Machine Learning Platform team, you will be a key technical leader architecting and scaling our Generative AI and LLM platform capabilities.</p>
<p>Training and deploying foundation models places unprecedented demands on our systems. You will define the technical strategy and build the core infrastructure that enables machine learning engineers and researchers to seamlessly train, evaluate, and iterate on large language models at Reddit scale.</p>
<ul>
<li>Drive GenAI Infrastructure Strategy: Propose, design, and lead the architecture of our next-generation LLM platform, significantly advancing our capabilities to support large-scale foundation models that serve millions of redditors.</li>
<li>Design Resilient, Large-Scale Distributed Systems: Architect highly fault-tolerant training infrastructure capable of supporting multi-week, distributed workloads across massive GPU clusters.</li>
<li>Build Self-Serve LLM Workflows: Design and implement robust, production-grade pipelines for LLM fine-tuning (e.g., SFT, RLHF/DPO).</li>
<li>Develop Comprehensive Evaluation &amp; Benchmarking Infrastructure: Treat model evaluation as a first-class platform capability.</li>
<li>Architect Advanced Data Ingestion Pipelines: Extend our distributed data platforms to natively and efficiently handle the massive, multimodal datasets (text, image, video) required for modern GenAI workloads.</li>
</ul>
<p>You will have 10+ years of work experience in a production software development environment or building complex distributed data systems, plus a degree in ML, Engineering, Computer Science, or a related discipline.</p>
<p>GenAI/LLM Infrastructure Expertise: Proven track record of designing and operating large-scale ML systems, specifically working with distributed training frameworks (e.g., FSDP, DeepSpeed, Megatron-LM) and LLM serving/inference optimization (e.g., vLLM, TensorRT-LLM).</p>
<p>Distributed Systems Mastery: Hands-on experience managing fault-tolerant, petabyte-scale distributed systems and multi-node/multi-GPU training clusters.</p>
<p>Advanced MLOps Knowledge: Deep understanding of modern ML orchestration, fine-tuning pipelines, and model evaluation methodologies.</p>
<p>GPU Experience: Hands-on practice with CUDA environments and GPU virtualization/containerization, all within Kubernetes.</p>
<p>Production Engineering Fundamentals: Hands-on experience with Kubernetes, Docker, and building production-quality, object-oriented code in Python and/or Go.</p>
<p>Strong focus on scalability, reliability, performance, and ease of use.</p>
<p>You are an undying advocate for platform users and have a deep intuition for the machine learning development lifecycle.</p>
<p>Strong organizational &amp; communication skills.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$253,300-$354,600 USD</Salaryrange>
      <Skills>GenAI/LLM Infrastructure Expertise, Distributed Systems Mastery, Advanced MLOps Knowledge, GPU Experience, Production Engineering Fundamentals</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website with over 121 million daily active unique visitors.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7772523</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d57b93e-423</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, data architecture, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456948002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3b01c809-8ef</externalid>
      <Title>Staff Machine Learning Systems Engineer</Title>
      <Description><![CDATA[<p>As a Staff Machine Learning Systems Engineer at Reddit, you will lead the development of a platform for large-scale ML models. Your responsibilities will include designing end-to-end model lifecycle patterns (MLOps) to boost velocity of development for ML engineers, zero-to-one development and support of a graph ML codebase and platform, collaborating with ML engineers on performance tuning, optimizing batch data processing, and architecting pipelines to build and maintain massive graph data structures.</p>
<p>We are looking for an experienced engineer with 8+ years of experience in ML infrastructure, including model training and model deployments. You should have hands-on experience with ML optimization, cloud-based technologies, MLOps tools, and proficiency with common programming languages and frameworks of ML. Strong focus on scalability, reliability, performance, and ease of use is essential.</p>
<p>In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, 401(k) program with employer match, generous time off for vacation, and parental leave.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$322,000 USD</Salaryrange>
      <Skills>ML infrastructure, model training, model deployments, ML optimization, cloud-based technologies, MLOps tools, Python, PyTorch, Tensorflow, graph ML codebase and platform, Apache Beam, Apache Spark, Ray Data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7731788</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7ba4251-36b</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Resident Solutions Architect to join our Professional Services team in Washington, D.C. As a Resident Solutions Architect, you will work with customers on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>Requirements:</p>
<ul>
<li>US Top Secret Clearance required for this position</li>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and benefits.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<ul>
<li>Zone 1 Pay Range: $180,656-$248,360 USD</li>
<li>Zone 2 Pay Range: $180,656-$248,360 USD</li>
<li>Zone 3 Pay Range: $180,656-$248,360 USD</li>
<li>Zone 4 Pay Range: $180,656-$248,360 USD</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, scope and timelines, documentation and white-boarding, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8356289002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbd81d47-d7e</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. This position may be offered as Senior Solutions Consultant, Resident Solutions Architect, or Senior Resident Solutions Architect. The final title will align to your experience, technical depth, and customer-facing ownership.</p>
<p>As a Big Data Solutions Architect (internal title: Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 10% of the time</li>
</ul>
<p>Preferred: Databricks Certification (not essential)</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8486738002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>219928ef-6de</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494148002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8efd6b3b-251</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456973002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d94d7ea-9ca</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
<Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461330002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f18e7306-00c</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
<Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark and knowledge of Apache Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, Databricks, CI/CD, MLOps, technical project delivery, documentation, white-boarding, client management, conflict management, scalable streaming, batch solutions, cloud-native components</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
<Employerdescription>Databricks is a company that provides data and AI solutions. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461325002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6d7f1a0-882</externalid>
      <Title>Resident Solutions Architect - Mumbai</Title>
      <Description><![CDATA[<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>
<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>
<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and other Databricks technologies</li>
<li>Providing expert-level technical guidance and support to customers during the implementation process</li>
<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>
<li>4+ years of experience as a Solution Architect creating designs, solving Big Data challenges for customers</li>
<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>
<li>Comfortable writing code in Python, PySpark, and Scala</li>
<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>
<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>
<li>Expertise in Azure</li>
<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>
<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>
<li>Ability to work with Partner Organization and deliver complex programs</li>
<li>Ability to lead large technical delivery teams</li>
<li>Understanding of the larger competitive landscape, such as EMR, Snowflake, and SageMaker</li>
<li>Experience with migration from on-prem / cloud to Databricks is a plus</li>
<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>
<li>Willingness to travel for onsite customer engagements within India</li>
<li>Documentation and white-boarding skills</li>
</ul>
<p>Good-to-have Skills:</p>
<ul>
<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>
<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>
<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>
<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>
<li>Expertise on cloud platforms like AWS and GCP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8107166002</Applyto>
      <Location>Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61b49b86-6c8</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>You will report to the regional Manager/Lead.</p>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8341313002</Applyto>
      <Location>New York City, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8664b981-66c</externalid>
      <Title>Data Platform Solutions Architect (Professional Services) - Emerging Enterprise &amp; DNB</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. Depending on experience and scope, this position may be offered as a Senior Solutions Consultant or a Resident Solutions Architect. You may know this role as a Big Data Solutions Architect, Analytics Architect, Data Platform Architect, or Technical Consultant. The final title will align to your experience, technical depth, and customer-facing ownership.</p>
<p>As a Data Platform Solutions Architect on our Professional Services team for the Emerging Enterprise &amp; Digital Natives business in EMEA, you will work with clients on short- to medium-term engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Drive high-impact customer projects: Design and build reference architectures, implement production use cases, and create “how-to” guides tailored to the unique needs of fast-moving Emerging Enterprise &amp; Digital Native customers in EMEA.</li>
<li>Collaborate on project scoping: Work closely with Engagement Managers and customers to define project scope, schedules, and deliverables for professional services engagements.</li>
<li>Enable transformational initiatives: Guide strategic customers through their end-to-end big data journeys, migrating from legacy platforms and deploying industry-leading data and AI applications on the Databricks platform.</li>
<li>Consult on architecture &amp; design: Provide thought leadership on solution design and implementation strategies, ensuring customers can successfully evaluate and adopt Databricks.</li>
<li>Offer advanced support: Serve as an escalation point for operational issues, collaborating with Databricks Support and Engineering to resolve challenges quickly.</li>
<li>Align technical delivery: Partner with cross-functional Databricks teams (Technical, PM, Architecture, and Customer Success) to align on milestones, ensuring customer needs and deadlines are met.</li>
<li>Amplify product feedback: Provide implementation insights to Databricks Product and Support teams, guiding rapid improvements in features and troubleshooting for customers.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 10% of the time</li>
<li>[Preferred] Databricks Certification (not essential)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439047002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6ea8bf6b-ef6</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role are listed below and represent the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494153002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2afc821d-248</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role are listed below and represent the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494149002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a0bf55-11a</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role are listed below and represent the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461222002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62b2a5a2-9bd</externalid>
      <Title>Big Data Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>As a Big Data Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Working on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Working with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guiding strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consulting on architecture and design; bootstrapping or implementing customer projects, leading to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Providing an escalated level of support for customer operational issues</li>
<li>Working with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Working with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Strong expertise in data warehousing concepts, architecture, and migration strategies</li>
<li>Comfortable writing code in Python, PySpark, or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Data Science expertise is a nice-to-have</li>
<li>Travel to customers 10-20% of the time</li>
<li>Databricks Certification</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data warehousing, migration strategies, Python, Pyspark, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482697002</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5aacaad3-05b</externalid>
      <Title>Senior Machine Learning Engineer, Payments</Title>
      <Description><![CDATA[<p>Job Title: Senior Machine Learning Engineer, Payments</p>
<p>Location: Remote-USA</p>
<p>The Payments team at Airbnb is responsible for everything related to settling money in Airbnb&#39;s global marketplace. As a Senior Machine Learning Engineer for Payments, you will be the catalyst that transforms bold AI innovation into production systems that make the Airbnb Payments experience feel effortless and secure.</p>
<p>Responsibilities:</p>
<ul>
<li>Spearhead LLM agents, real-time anomaly detectors, and other breakthrough solutions that solve real-world problems and create product magic.</li>
<li>Collaborate with product, engineering, ops, and data science to spot high-leverage opportunities, refine AI/ML requirements, make principled architecture choices, and measure business value with clear, data-driven metrics.</li>
<li>Design, train, deploy, and operate large-scale AI applications for both batch and streaming workloads, ensuring low latency, high reliability, and continuous improvement via automated monitoring and retraining loops.</li>
<li>Mentor and inspire teammates, fostering a collaborative, experimentation-driven environment where cutting-edge research meets production excellence and every engineer is empowered to push AI boundaries at Airbnb.</li>
</ul>
<p>Your Expertise:</p>
<ul>
<li>5+ years of industry experience in applied AI/ML, inclusive of an MS or PhD in relevant fields.</li>
<li>Strong programming (Python/Java) and data engineering skills.</li>
<li>Proven mastery of modern AI/LLM workflows: prompt engineering, fine-tuning (LoRA, RLHF), hallucination mitigation, safety guardrails, and rigorous online/offline testing to minimize training/inference drift and ensure reliable outcomes.</li>
<li>Hands-on experience with at least three of the following: PyTorch/TensorFlow, scalable inference stacks, vector search, orchestration/MLOps platforms (Kubeflow, Airflow), large-scale data streaming &amp; processing (Spark, Ray, Kafka).</li>
<li>Demonstrated success designing, deploying, and monitoring production AI systems (e.g., personalization engines, generative content services), complete with drift/cost/latency monitoring, automated retraining triggers, and cross-functional collaboration that translates ambiguous business needs into measurable AI impact.</li>
<li>Prior knowledge of AI/ML applications in the Payments domain is highly desirable.</li>
</ul>
<p>Our Commitment To Inclusion &amp; Belonging:</p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively led people, and to develop the best products, services, and solutions.</p>
<p>How We&#39;ll Take Care of You:</p>
<p>Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs, and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>
<p>Pay Range: $191,000-$223,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$191,000-$223,000 USD</Salaryrange>
      <Skills>Python, Java, PyTorch, TensorFlow, scalable inference stacks, vector search, orchestration/MLOps platforms, large-scale data streaming &amp; processing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7755758</Applyto>
      <Location>Remote-USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d723067-22d</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494144002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7e5c6f46-bb6</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456975002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b4a461d1-b6b</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd-party migrations and the end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a company that provides a data and AI platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494128002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64176983-af0</externalid>
      <Title>Research Engineer, Reward Models Platform</Title>
      <Description><![CDATA[<p>You will work as a Research Engineer on Anthropic&#39;s Reward Models Platform. Your primary responsibility will be to design and build infrastructure that enables researchers to rapidly iterate on reward signals. This includes tools for rubric development, human feedback data analysis, and reward robustness evaluation. You will also develop systems for automated quality assessment of rewards, including detection of reward hacks and other pathologies. Additionally, you will create tooling that allows researchers to easily compare different reward methodologies and understand their effects. You will collaborate with researchers to translate science requirements into platform capabilities and optimize existing systems for performance, reliability, and ease of use.</p>
<p>You will have the opportunity to contribute directly to research projects yourself and have a direct impact on our ability to scale reward development across domains. You will work closely with researchers and translate ambiguous requirements into well-scoped engineering projects.</p>
<p>To be successful in this role, you should have prior research experience and be excited to work closely with researchers. You should have strong Python skills and experience with ML workflows and data pipelines, and building related infrastructure/tooling/platforms. You should be comfortable working across the stack, ranging from data pipelines to experiment tracking to user-facing tooling.</p>
<p>Strong candidates may also have experience with ML research, building internal tooling and platforms for ML researchers, data quality assessment and pipeline optimization, experiment tracking, evaluation frameworks, or MLOps tooling. They may also have experience with large-scale data processing, Kubernetes, distributed systems, or cloud infrastructure, and familiarity with reinforcement learning or fine-tuning workflows.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$500,000 USD</Salaryrange>
      <Skills>Python, ML workflows, data pipelines, infrastructure/tooling/platforms, rubric development, human feedback data analysis, reward robustness evaluation, automated quality assessment, reward hacks, pathologies, experiment tracking, evaluation frameworks, MLOps tooling, ML research, building internal tooling and platforms for ML researchers, data quality assessment and pipeline optimization, Kubernetes, distributed systems, cloud infrastructure, reinforcement learning, fine-tuning workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that develops artificial intelligence systems. It was founded by a group of researchers and engineers.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5024831008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32d8d11d-9dc</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd-party migrations and the end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8371312002</Applyto>
      <Location>New York City, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e92e8a2-811</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd-party migrations and the end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494130002</Applyto>
      <Location>New York City, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9d5fcc78-b2b</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd-party migrations and the end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Python, Scala, AWS, Azure, GCP, distributed computing, Spark runtime internals</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8423296002</Applyto>
      <Location>Central - United States; Northeast - United States; Southeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4cd630c8-77d</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>Job Title: Resident Solutions Architect - Public Sector</p>
<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>Responsibilities:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope various professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet customer needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>Requirements:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $180,656-$248,360 USD.</p>
<p>About Databricks:</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 - rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation, white-boarding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. The company was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494137002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8131cff5-1a9</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8341311002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8d8b3af4-285</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494147002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b647b7da-f8f</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>US Top Secret clearance required for this position</li>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494107002</Applyto>
      <Location>Virginia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8d1ca2f5-7be</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461220002</Applyto>
      <Location>Chicago, Illinois</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d222227-15b</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
<Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456969002</Applyto>
      <Location>Chicago, Illinois</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6860353a-782</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, big data, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461241002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6fed2bb6-3b6</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing and building reference architectures for customers</li>
<li>Creating how-to&#39;s and productionalizing customer use cases</li>
<li>Guiding strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consulting on architecture and design; bootstrapping or implementing customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Providing an escalated level of support for customer operational issues</li>
<li>Working with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Working with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>The pay range for this role is $180,656-$248,360 USD per year, depending on location and experience.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD per year</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461321002</Applyto>
      <Location>Chicago, Illinois</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eb3ba652-daa</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461163002</Applyto>
      <Location>San Francisco, California</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7179545-496</externalid>
      <Title>Resident Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Resident Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 10% of the time</li>
<li>Databricks Certification preferred, but not essential</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, technical project delivery, documentation, white-boarding, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company behind the Databricks Data Intelligence Platform, used by over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8367942002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country>United Kingdom</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d0793a44-d91</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461328002</Applyto>
      <Location>Charlotte, North Carolina</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5fd85b1e-563</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
<li>Nice to have: Databricks Certification</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456965002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3827f936-fc2</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
<Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow.</p>
<p>To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>
<p>For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel.</p>
<p>We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, Cloud ecosystems, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461326002</Applyto>
      <Location>New York City, New York</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9223ca6d-d9e</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, Python, Scala, CI/CD, MLOps, distributed computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461193002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8d65cea1-fd1</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461219002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>507bea17-ad7</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461251002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>52542071-2d3</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior AI Engineer to architect, build, and scale AI-powered systems that redefine how Yuno operates and innovates.</p>
<p>This role goes beyond building models; it&#39;s about designing agentic systems, AI-assisted workflows, and data-driven decision engines that help the company scale faster than headcount.</p>
<p>You will be at the forefront of developing the next generation of autonomous systems that power both customer-facing products and internal operations, from optimizing payment experiences to accelerating development, testing, troubleshooting, and product creation.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Architect, build, and deploy LLM-powered applications that augment and automate key workflows across Yuno, from customer service and new product development to internal agentic workflows that optimize critical company processes.</p>
</li>
<li><p>Design autonomous AI systems that can execute technical analysis, testing, troubleshooting, and decision-making at scale.</p>
</li>
<li><p>Develop AI-driven tools that create measurable business impact, improving efficiency, accelerating innovation, and driving revenue growth.</p>
</li>
</ul>
<p>Operational Intelligence &amp; Scalability:</p>
<ul>
<li><p>Identify areas across Yuno where AI can generate efficiency gains, reducing manual work, optimizing processes, and enabling smarter operations.</p>
</li>
<li><p>Build AI systems that enable Yuno to scale without exponential growth in headcount, thereby creating sustainable productivity and organizational intelligence.</p>
</li>
</ul>
<p>Data &amp; Model Optimization:</p>
<ul>
<li><p>Lead fine-tuning and contextual optimization of AI models using Yuno&#39;s proprietary data.</p>
</li>
<li><p>Continuously refine performance through structured feedback loops, observability metrics, and user interaction data.</p>
</li>
<li><p>Ensure adherence to privacy, compliance, and ethical AI principles in all model development.</p>
</li>
</ul>
<p>Cross-Functional Collaboration &amp; Innovation:</p>
<ul>
<li><p>Work closely with data, product, and infrastructure teams to define the long-term AI roadmap for Yuno.</p>
</li>
<li><p>Experiment with emerging AI technologies to identify new capabilities that can drive differentiation and strategic advantage.</p>
</li>
<li><p>Contribute to a culture of experimentation, rapid iteration, and continuous learning across the AI function.</p>
</li>
</ul>
<p>The skills you need:</p>
<ul>
<li><p>5+ years of professional experience in AI/ML development, with at least 2 years focused on LLMs, RAG, or agentic systems.</p>
</li>
<li><p>Strong engineering and product mindset, capable of balancing technical depth with strategic impact.</p>
</li>
<li><p>Exceptional communication and collaboration skills to work across product, engineering, and data functions.</p>
</li>
<li><p>Passion for pushing the boundaries of applied AI and creating systems that drive real business transformation.</p>
</li>
<li><p>Willingness to work from the Hyderabad office.</p>
</li>
</ul>
<p>Minimum Qualifications:</p>
<ul>
<li><p>LLMs &amp; RAG: Proven experience designing and deploying systems using models like GPT, Claude, Gemini, or similar, including Retrieval-Augmented Generation (RAG) pipelines and contextual retrieval systems.</p>
</li>
<li><p>AI Agents &amp; Multi-Agent Systems: Hands-on experience with Crew.ai, LangChain, LangGraph, or similar frameworks to build orchestrated agentic workflows.</p>
</li>
<li><p>Fine-Tuning &amp; Context Engineering: Expertise in supervised fine-tuning (SFT), LoRA, or custom dataset adaptation for domain-specific tasks.</p>
</li>
<li><p>Programming &amp; Frameworks: Proficiency in Python, Go, and AI/ML libraries such as PyTorch, TensorFlow, or JAX.</p>
</li>
<li><p>AI Infrastructure: Experience designing scalable, production-ready AI systems in AWS, GCP, or Azure, with a deep understanding of vector databases, model serving, and inference optimization.</p>
</li>
<li><p>Observability &amp; Monitoring: Familiarity with LangSmith, LangFuse, or equivalent tools for tracking, debugging, and evaluating LLM performance.</p>
</li>
<li><p>API Integration: Expertise in integrating AI systems with RESTful APIs and internal platforms to create seamless, usable products.</p>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li><p>Experience using LlamaIndex, Hugging Face, or similar open-source frameworks.</p>
</li>
<li><p>Strong understanding of context optimization, embedding strategies, and knowledge retrieval.</p>
</li>
<li><p>Familiarity with MLOps, CI/CD pipelines, and production deployment best practices.</p>
</li>
<li><p>Prior experience in fintech, payments, or other data-intensive and regulated industries.</p>
</li>
<li><p>Analytical rigor and ability to translate business objectives into measurable AI outcomes.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLMs &amp; RAG, AI Agents &amp; Multi-Agent Systems, Fine-Tuning &amp; Context Engineering, Programming &amp; Frameworks, AI Infrastructure, Observability &amp; Monitoring, API Integration, Experience using LlamaIndex, Hugging Face, or similar open-source frameworks, Strong understanding of context optimization, embedding strategies, and knowledge retrieval, Familiarity with MLOps, CI/CD pipelines, and production deployment best practices, Prior experience in fintech, payments, or other data-intensive and regulated industries, Analytical rigor and ability to translate business objectives into measurable AI outcomes</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Yuno</Employername>
      <Employerlogo>https://logos.yubhub.co/yuno.com.png</Employerlogo>
      <Employerdescription>Yuno is a payment orchestration company building infrastructure for frictionless, global transactions.</Employerdescription>
      <Employerwebsite>https://www.yuno.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/yuno/11c53d1c-9bf0-41c7-8547-35ad174dacb2</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d5b743bb-d8f</externalid>
      <Title>Product Manager, AI Platforms</Title>
      <Description><![CDATA[<p>The AI Platform Product Manager will drive the strategy and execution of Shield AI&#39;s next-generation autonomy intelligence stack. This PM owns the product vision and roadmap for the Hivemind AI Platform, ensuring we can manufacture, govern, and field advanced world models, robotics foundation models, and vision-language-action systems safely and at scale.</p>
<p>This role sits at the intersection of AI/ML, autonomy, model lifecycle, infrastructure, and product strategy. The PM partners closely with engineering, AI research, Hivemind Solutions, and field teams to deliver the tooling that enables sovereign autonomy, AI Factories at the edge, and continuous learning: capabilities that are central to Shield AI&#39;s strategic direction.</p>
<p>This is a high-impact role for an experienced product leader excited to define how foundation models are trained, validated, governed, and deployed across thousands of autonomous systems in highly contested environments.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>AI Model Development &amp; Training Platform</li>
</ul>
<p>Own the roadmap for foundation model training workflows, including dataset ingestion, curation, labeling, synthetic data generation, domain model training, and distillation pipelines. Define requirements for world models, robotics models, and VLA-based training, evaluation, and specialization. Lead the evolution of MLOps capabilities in Forge, including data lineage, experiment tracking, model versioning, and scalable evaluation suites.</p>
<ul>
<li>Data, Simulation &amp; Synthetic Data Factory</li>
</ul>
<p>Define product requirements for synthetic data generation, simulation-integrated data flywheels, and automated scenario generation. Partner with Digital Twin, Simulation, and autonomy teams to convert natural-language mission inputs into data needs, training procedures, and model variants.</p>
<ul>
<li>Safe Deployment &amp; Model Governance</li>
</ul>
<p>Lead the development of model governance and auditability tooling, including model cards, dataset rights, lineage tracking, safety gates, and compliance evidence. Build guardrails and workflows to safely deploy models onto edge hardware in disconnected, GPS- or comms-denied environments. Partner with Safety, Certification, Cyber, and Engineering teams to ensure traceability and evaluation pipelines meet operational and accreditation requirements.</p>
<ul>
<li>Edge Deployment &amp; AI Factory Integration</li>
</ul>
<p>Partner with Pilot, EdgeOS, and hardware teams to integrate foundation-model-based perception and reasoning into autonomy behaviors. Define requirements for distillation, quantization, and inference tooling as part of the “three-computer” development and deployment model. Ensure closed-loop workflows between cloud model training and edge-native execution.</p>
<ul>
<li>Cross-Functional Leadership</li>
</ul>
<p>Collaborate with Engineering, Research, Product, Customer Engagement, and Solutions teams to ensure model outputs meet mission and platform constraints. Translate advanced AI capabilities into intuitive workflows that platform OEMs and partner nations can use to build sovereign AI factories. Sequence foundational capabilities that unblock autonomy, simulation, and customer-facing product teams.</p>
<ul>
<li>User &amp; Customer Impact</li>
</ul>
<p>Develop deep empathy for ML engineers, autonomy developers, and Solutions engineers who rely on the platform. Capture operational data gaps, mission-driven model needs, and domain-specific specialization requirements. Lead demos and onboarding for model-development capabilities across internal and external teams.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000 - $290,000 a year</Salaryrange>
      <Skills>AI Model Development &amp; Training Platform, Data, Simulation &amp; Synthetic Data Factory, Safe Deployment &amp; Model Governance, Edge Deployment &amp; AI Factory Integration, Cross-Functional Leadership, User &amp; Customer Impact, Strong engineering background, Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure, Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments, Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows, Experience working on autonomy, robotics, embedded AI, or mission-critical systems, Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures, Experience supporting defense, dual-use, or safety-critical AI systems, Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment), Advanced degree in engineering, ML/AI, robotics, or a related field</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/7886f437-2d5e-4616-8dcb-3dc488f1f585</Applyto>
      <Location>San Diego</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>b3289639-f91</externalid>
      <Title>Machine Learning Engineer, Open-Source Software</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>We believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>Role Summary</p>
<p>You will be in charge of open-sourcing state-of-the-art models, whilst maintaining and improving Mistral’s publicly available libraries. Your work is critical in helping turn research breakthroughs into tangible solutions and improve Mistral&#39;s open-source ecosystem.</p>
<p>Responsibilities</p>
<ul>
<li>Release our models to open-source platforms and libraries, e.g., vLLM, GitHub, Hugging Face</li>
<li>Maintain Mistral’s open-source libraries (mistral-common, mistral-finetune, mistral-inference)</li>
<li>Create and maintain tooling and services, both internal facing (internal research) and external facing (open-source libraries)</li>
<li>Implement and optimize open-source and internal libraries for performance and accuracy, ensuring production readiness and employing cutting-edge technology and innovative approaches</li>
<li>Collaborate with the open-source community (PyTorch, vLLM, Hugging Face)</li>
</ul>
<p>About You</p>
<ul>
<li>Master’s degree in Computer Science, Machine Learning, Data Science, or a related field</li>
<li>Experience contributing to popular open-source libraries such as PyTorch, TensorFlow, JAX, vLLM, Transformers, Llama.cpp, ...</li>
<li>Passion for contributing to the open-source software ecosystem</li>
<li>Expert programming skills in Python, PyTorch, and MLOps</li>
<li>Adaptable, proactive, and autonomous</li>
<li>Attention to detail and a drive to go the last mile to build almost perfect tools</li>
<li>Deep understanding of machine learning approaches, especially LLMs and algorithms</li>
<li>Low ego, collaborative, with a real team-player mindset</li>
</ul>
<p>Now, it would be ideal if you have:</p>
<ul>
<li>Experience with training and fine-tuning large language models (e.g., distillation, supervised fine-tuning, policy optimization)</li>
<li>Experience working with Slurm</li>
<li>Experience working with research teams</li>
<li>Experience as a core maintainer of a popular ML open-source library</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, MLOps, Open-source software development, Machine learning, Large language models, Slurm, Experience with training and fine-tuning large language models, Experience working with Slurm, Research team experience, Core-maintainer of a popular ML open-source library</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, optimized, open-source and cutting-edge AI models, products and solutions for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ef4c26fc-3fdb-4dd2-a64e-95264ee769dd</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2f08a0c9-5e8</externalid>
      <Title>Engineering Manager</Title>
      <Description><![CDATA[<p>We are seeking experienced Engineering Managers to join our team and drive the development of mission-critical systems powering our AI platform and products. In this role, you will leverage your deep technical expertise and leadership skills to build scalable, high-impact systems while nurturing and guiding high-performing engineering teams.</p>
<p>Depending on your background, you may oversee one (Team Lead) or multiple (Engineering Manager) product teams, driving both execution and delivery while staying hands-on. Our product engineering teams typically consist of five members.</p>
<p>As a team lead, you will dynamically balance individual contributions, project leadership, and team management, with the balance shifting as your team grows. You will work closely with Product Managers to deliver features that directly benefit our users and business.</p>
<p>Your responsibilities will include fostering a strong team culture, establishing clear processes, elevating engineering standards, and ensuring consistent, high-quality delivery.</p>
<p>This is a highly critical role at the intersection of product, engineering, infrastructure, and people. You will help shape how our engineering organization works, how we build at scale, and how we turn cutting-edge AI research into reliable, production-grade systems used by millions of developers and end users.</p>
<p>If you enjoy building complex systems, leading ambitious engineers and shipping products that matter, this role is for you.</p>
<p>Key responsibilities include:</p>
<ul>
<li><p>Empower Your Team(s): Act as the clear point of contact for your team and its projects. Remove obstacles, unblock execution, and create an environment where engineers can focus on building impactful, high-quality products.</p>
</li>
<li><p>Lead Delivery and Accountability: Own the delivery of your team’s roadmap and ensure projects are shipped with the right level of quality, reliability, and performance. Keep the team accountable for results and maintain strong execution momentum.</p>
</li>
<li><p>Bridge Product and Technology: Partner closely with Product Managers to define the roadmap and priorities of your team. Scope initiatives, estimate engineering effort and drive prioritization decisions based on business and technical impact.</p>
</li>
<li><p>Hands-on Technical Leadership: Stay deeply involved in architecture design, system design, coding, peer reviews and technical decision-making. Lead by example and remain connected to the day-to-day challenges of the team.</p>
</li>
<li><p>Process and Clarity: Define and evolve team processes, best practices and execution rituals with the team. Ensure they remain relevant as the team and product scale.</p>
</li>
<li><p>Raise the Engineering Bar: Challenge engineers to step up, take ownership and grow beyond their comfort zone. Increase the team’s bus factor through knowledge sharing, documentation and rotation of responsibilities.</p>
</li>
</ul>
<p>About you:</p>
<ul>
<li><p>5+ years of relevant professional work experience.</p>
</li>
<li><p>Master’s degree in Computer Science, Information Technology or a related field.</p>
</li>
<li><p>Proficiency in one of the following languages: Python, JavaScript/TypeScript, C#, Golang.</p>
</li>
<li><p>Experience with building and leading high-performing teams.</p>
</li>
<li><p>Experience with working with cross-functional teams like product, design, business, etc.</p>
</li>
<li><p>Experience with project management and planning.</p>
</li>
<li><p>Ownership and capacity to ship products end-to-end.</p>
</li>
<li><p>Strong problem-solving abilities and attention to detail.</p>
</li>
<li><p>Excellent communication skills.</p>
</li>
<li><p>Low ego and team spirit mindset.</p>
</li>
<li><p>Autonomous and self-starter.</p>
</li>
</ul>
<p>Ideal if you have experience with:</p>
<ul>
<li><p>Platform &amp; DX products (API, SDK, Tooling, Observability, Billing, etc.)</p>
</li>
<li><p>AI/ML/MLOps engineering</p>
</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, JavaScript/TypeScript, C#, Golang, team leadership, cross-functional collaboration, project management and planning, end-to-end product ownership, problem-solving, communication, Platform &amp; DX products (API, SDK, Tooling, Observability, Billing), AI/ML/MLOps engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a tech company that develops AI-powered products and solutions for enterprise use. It has a global presence with teams in France, USA, UK, Germany, and Singapore.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/add3ec37-a655-4a60-8823-1e871aa1e9b2</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>eb286b66-47b</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We&#39;re looking for an AI Scientist to join our research team in Zurich. As an AI Scientist, you will research and develop novel methods to push the frontier of large language models. You will work across use cases (e.g., reasoning, code, agents) and modalities (e.g., text, image, and speech). You will build tooling and infrastructure to allow training, evaluation, and analysis of AI models at scale. You will also work cross-functionally with other scientists, engineers, and product teams to ship AI systems that have a real-world impact.</p>
<p>About you</p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g., Rust, Go, Java).</li>
<li>You have hands-on experience with AI frameworks (e.g., PyTorch, JAX) or distributed systems (e.g., Ray, Kubernetes).</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production.</li>
<li>You are a self-starter, autonomous, and a team player.</li>
</ul>
<p>Now, it would be ideal if</p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion.</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation, and deployment.</li>
<li>You have a strong publication record in a relevant scientific domain.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Monthly meal allowance</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Private pension plan</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, JAX, Rust, Go, Java, Ray, Kubernetes, Training large transformer models in a distributed fashion, Navigating the full MLOps stack, Strong publication record in a relevant scientific domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and provides high-performance, open-source AI models, products, and solutions for enterprise and personal use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/bedfc2aa-f1b6-4136-bd17-b3abe4c06120</Applyto>
      <Location>Zurich</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>423610cf-92b</externalid>
      <Title>Applied AI, Technical Lead, Forward Deployed AI Engineer - EMEA</Title>
      <Description><![CDATA[<p>About the Job</p>
<p>Mistral AI is seeking a Technical Lead, Applied AI to drive the technical strategy, execution, and delivery of complex AI solutions for our enterprise customers.</p>
<p>In this role, you will lead project teams of Applied AI Engineers, ensuring the successful deployment of Mistral AI products and the development of high-impact, scalable AI use cases. You will act as the primary technical point of contact for our most strategic customers, guiding them through the entire lifecycle, from pre-sales to post-implementation, while collaborating closely with research, product, and engineering teams to shape the future of our offerings.</p>
<p>As a Technical Lead, you will bridge the gap between cutting-edge AI research and real-world enterprise applications, ensuring our solutions are robust, scalable, and aligned with both customer needs and Mistral’s technological vision.</p>
<p>Responsibilities</p>
<ul>
<li><p>Deliver, as an IC, the critical lines of code of our complex projects: you’ll be hands-on and de-risk the critical parts. You’ll stay deeply involved in coding, reviewing, and optimizing AI solutions.</p>
</li>
<li><p>Lead technical teams of Applied AI Engineers, providing mentorship, technical guidance, and best practices for deploying state-of-the-art GenAI applications across industries.</p>
</li>
<li><p>Lead technical discussions during pre-sales, translating customer requirements into actionable solutions and communicating Mistral’s technological advantages to diverse stakeholders.</p>
</li>
<li><p>Design and oversee the implementation of complex AI systems, including fine-tuning, RAG, agentic workflows, and custom LLM applications, ensuring alignment with Mistral’s product roadmap and open-source initiatives.</p>
</li>
<li><p>Drive innovation by identifying emerging trends in AI, evaluating new tools and methodologies, and championing best practices for fine-tuning, inference, and deployment.</p>
</li>
<li><p>Work closely with product managers, researchers, and engineers to ensure seamless integration of customer feedback into Mistral’s product development cycle.</p>
</li>
</ul>
<p>How We Work in Applied AI</p>
<ul>
<li><p>We care about people and outputs. What matters is what you ship, not the time you spend on it.</p>
</li>
<li><p>Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.</p>
</li>
<li><p>Always ask why. The best solutions come from deep understanding, not from copying what worked before.</p>
</li>
<li><p>We say what we mean. Feedback is direct, timely, and given because we care.</p>
</li>
<li><p>No politics. Low ego, high standards.</p>
</li>
<li><p>We embrace an unstructured environment and find joy in it.</p>
</li>
</ul>
<p>About You</p>
<ul>
<li><p>You are fluent in English.</p>
</li>
<li><p>You hold a PhD or Master’s degree in AI, Machine Learning, Computer Science, or a related field.</p>
</li>
<li><p>You have 7-8+ years of experience in AI/ML, with at least 2 years in a technical leadership role (e.g., Tech Lead, Engineering Manager, or Solutions Architect) focused on AI products or enterprise solutions.</p>
</li>
<li><p>You have a proven track record of leading teams to deliver complex AI projects, from prototyping to production, in industries such as tech, finance, healthcare, or industrial automation.</p>
</li>
<li><p>You possess deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</p>
</li>
<li><p>You are proficient in Python, PyTorch, and modern AI frameworks (e.g., LangChain, Hugging Face).</p>
</li>
<li><p>Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools is a plus.</p>
</li>
<li><p>You have strong software engineering skills, including API design, backend/full-stack development, and system architecture.</p>
</li>
<li><p>You excel in technical communication, with the ability to articulate complex concepts to both technical and non-technical audiences, including executives and engineers.</p>
</li>
<li><p>You thrive in fast-paced, collaborative environments and are passionate about mentoring and growing technical talent.</p>
</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li><p>Contributed to open-source projects, particularly in the LLM or AI space.</p>
</li>
<li><p>Experience in customer-facing roles (e.g., Solutions Architect, Customer Engineer, or Technical Product Manager) with a focus on enterprise AI adoption.</p>
</li>
<li><p>A track record of driving technical strategy and influencing product direction based on customer needs and market opportunities.</p>
</li>
</ul>
<p>Why Join Us?</p>
<p>You’ll have the opportunity to shape the future of AI adoption in enterprises, work with a world-class team, and contribute to open-source projects that impact millions. If you’re excited about leading technical innovation and solving real-world challenges with AI, we’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, LangChain, Hugging Face, AI, Machine Learning, Cloud Platforms, MLOps Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides AI technology for enterprise customers. It has a workforce distributed across several countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/e2cf255f-49c8-4630-afe0-7f665f51f01f</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9b715683-620</externalid>
      <Title>Applied AI, Technical Lead, Forward Deployed AI Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a company that democratizes AI through high-performance, optimized, open-source and cutting-edge models, products, and solutions. Our offerings include le Chat, the AI assistant for life and work.</p>
<p>About The Job</p>
<p>Mistral AI is seeking a Technical Lead, Applied AI to drive the technical strategy, execution, and delivery of complex AI solutions for our enterprise customers. In this role, you will lead project teams of Applied AI Engineers, ensuring the successful deployment of Mistral AI products and the development of high-impact, scalable AI use cases.</p>
<p>Responsibilities</p>
<ul>
<li><p>Deliver, as an IC, the critical lines of code of our complex projects: you’ll be hands-on and de-risk the critical parts. You’ll stay deeply involved in coding, reviewing, and optimizing AI solutions.</p>
</li>
<li><p>Lead technical teams of Applied AI Engineers, providing mentorship, technical guidance, and best practices for deploying state-of-the-art GenAI applications across industries.</p>
</li>
<li><p>Lead technical discussions during pre-sales, translating customer requirements into actionable solutions and communicating Mistral’s technological advantages to diverse stakeholders.</p>
</li>
<li><p>Design and oversee the implementation of complex AI systems, including fine-tuning, RAG, agentic workflows, and custom LLM applications, ensuring alignment with Mistral’s product roadmap and open-source initiatives.</p>
</li>
<li><p>Drive innovation by identifying emerging trends in AI, evaluating new tools and methodologies, and championing best practices for fine-tuning, inference, and deployment.</p>
</li>
<li><p>Work closely with product managers, researchers, and engineers to ensure seamless integration of customer feedback into Mistral’s product development cycle.</p>
</li>
</ul>
<p>How We Work in Applied AI</p>
<ul>
<li><p>We care about people and outputs. What matters is what you ship, not the time you spend on it.</p>
</li>
<li><p>Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.</p>
</li>
<li><p>Always ask why. The best solutions come from deep understanding, not from copying what worked before.</p>
</li>
<li><p>We say what we mean. Feedback is direct, timely, and given because we care.</p>
</li>
<li><p>No politics. Low ego, high standards.</p>
</li>
<li><p>We embrace an unstructured environment and find joy in it.</p>
</li>
</ul>
<p>About You</p>
<ul>
<li><p>You are fluent in French and English.</p>
</li>
<li><p>You hold a PhD or Master’s degree in AI, Machine Learning, Computer Science, or a related field.</p>
</li>
<li><p>You have 7-8+ years of experience in AI/ML, with at least 2 years in a technical leadership role (e.g., Tech Lead, Engineering Manager, Staff Engineer, or Solutions Architect) focused on AI products or enterprise solutions.</p>
</li>
<li><p>You have a proven track record of leading teams to deliver complex AI projects, from prototyping to production, in industries such as tech, finance, healthcare, or industrial automation.</p>
</li>
<li><p>You possess deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</p>
</li>
<li><p>You are proficient in Python, PyTorch, and modern AI frameworks (e.g., LangChain, Hugging Face).</p>
</li>
<li><p>Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools is a plus.</p>
</li>
<li><p>You have strong software engineering skills, including API design, backend/full-stack development, and system architecture.</p>
</li>
<li><p>You excel in technical communication, with the ability to articulate complex concepts to both technical and non-technical audiences, including executives and engineers.</p>
</li>
<li><p>You thrive in fast-paced, collaborative environments and are passionate about mentoring and growing technical talent.</p>
</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li><p>Contributed to open-source projects, particularly in the LLM or AI space.</p>
</li>
<li><p>Experience in customer-facing roles (e.g., Solutions Architect, Customer Engineer, or Technical Product Manager) with a focus on enterprise AI adoption.</p>
</li>
<li><p>A track record of driving technical strategy and influencing product direction based on customer needs and market opportunities.</p>
</li>
</ul>
<p>Why Join Us?</p>
<p>You’ll have the opportunity to shape the future of AI adoption in enterprises, work with a world-class team, and contribute to open-source projects that impact millions. If you’re excited about leading technical innovation and solving real-world challenges with AI, we’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, LangChain, Hugging Face, Machine Learning, Artificial Intelligence, Natural Language Processing, Fine-Tuning, RAG, Agentic Systems, Cloud Platforms, MLOps Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides AI solutions for enterprises, with a comprehensive AI platform designed to meet enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/042d7b29-279b-48e2-a44b-c7bdc3180dab</Applyto>
      <Location>Montreal</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1b91ca47-183</externalid>
      <Title>Applied AI, Technical Lead - Forward Deployed AI Engineer</Title>
      <Description><![CDATA[<p>About the Job:</p>
<p>Mistral AI is seeking a Technical Lead, Applied AI to drive the technical strategy, execution, and delivery of complex AI solutions for our enterprise customers.</p>
<p>In this role, you will lead project teams of Applied AI Engineers, ensuring the successful deployment of Mistral AI products and the development of high-impact, scalable AI use cases.</p>
<p>You will act as the primary technical point of contact for our most strategic customers, guiding them through the entire lifecycle, from pre-sales to post-implementation, while collaborating closely with research, product, and engineering teams to shape the future of our offerings.</p>
<p>As a Technical Lead, you will bridge the gap between cutting-edge AI research and real-world enterprise applications, ensuring our solutions are robust, scalable, and aligned with both customer needs and Mistral’s technological vision.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Deliver, as an IC, the critical lines of code of our complex projects: you’ll be hands-on and de-risk the critical parts.</p>
</li>
<li><p>Lead technical teams of Applied AI Engineers, providing mentorship, technical guidance, and best practices for deploying state-of-the-art GenAI applications across industries.</p>
</li>
<li><p>Lead technical discussions during pre-sales, translating customer requirements into actionable solutions and communicating Mistral’s technological advantages to diverse stakeholders.</p>
</li>
<li><p>Design and oversee the implementation of complex AI systems, including fine-tuning, RAG, agentic workflows, and custom LLM applications, ensuring alignment with Mistral’s product roadmap and open-source initiatives.</p>
</li>
<li><p>Drive innovation by identifying emerging trends in AI, evaluating new tools and methodologies, and championing best practices for fine-tuning, inference, and deployment.</p>
</li>
<li><p>Work closely with product managers, researchers, and engineers to ensure seamless integration of customer feedback into Mistral’s product development cycle.</p>
</li>
</ul>
<p>Requirements:</p>
<ul>
<li><p>You hold a PhD or Master’s degree in AI, Machine Learning, Computer Science, or a related field.</p>
</li>
<li><p>You have 7-8+ years of experience in AI/ML, with at least 2 years in a technical leadership role (e.g., Tech Lead, Engineering Manager, or Solutions Architect) focused on AI products or enterprise solutions.</p>
</li>
<li><p>You have a proven track record of leading teams to deliver complex AI projects, from prototyping to production, in industries such as tech, finance, healthcare, or industrial automation.</p>
</li>
<li><p>You possess deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</p>
</li>
<li><p>You are proficient in Python, PyTorch, and modern AI frameworks (e.g., LangChain, Hugging Face).</p>
</li>
<li><p>Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools is a plus.</p>
</li>
<li><p>You have strong software engineering skills, including API design, backend/full-stack development, and system architecture.</p>
</li>
<li><p>You excel in technical communication, with the ability to articulate complex concepts to both technical and non-technical audiences, including executives and engineers.</p>
</li>
<li><p>You thrive in fast-paced, collaborative environments and are passionate about mentoring and growing technical talent.</p>
</li>
</ul>
<p>Ideal Qualifications:</p>
<ul>
<li><p>Contributed to open-source projects, particularly in the LLM or AI space.</p>
</li>
<li><p>Experience in customer-facing roles (e.g., Solutions Architect, Customer Engineer, or Technical Product Manager) with a focus on enterprise AI adoption.</p>
</li>
<li><p>A track record of driving technical strategy and influencing product direction based on customer needs and market opportunities.</p>
</li>
</ul>
<p>Why join us?</p>
<p>You’ll have the opportunity to shape the future of AI adoption in enterprises, work with a world-class team, and contribute to open-source projects that impact millions.</p>
<p>If you’re excited about leading technical innovation and solving real-world challenges with AI, we’d love to hear from you!</p>
<p>By applying, you agree to our Applicant Privacy Policy.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, LangChain, Hugging Face, Cloud platforms (AWS, GCP, Azure), MLOps tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and provides high-performance, open-source AI models, products, and solutions for enterprise and personal use.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ebfdc0da-13fd-4ae9-9861-bedb5ff493ea</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6ea4460a-445</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>About Mistral</p>
<p>At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone. We develop models for the enterprise and for consumers, focusing on delivering systems which can really change the way in which businesses operate and which can integrate into our daily lives.</p>
<p>What will you do?</p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g., reasoning, code, agents) and modalities (e.g., text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p>About you</p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous and a team player</li>
</ul>
<p>Now, it would be ideal if</p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
</ul>
<p>Benefits</p>
<p>France</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
</ul>
<p>UK</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Insurance</li>
<li>Transportation: Reimbursement of office parking charges, or £90/month for public transport</li>
<li>Sport: £90/month reimbursement for gym membership</li>
<li>Meal voucher: £200 monthly allowance for meals</li>
<li>Pension plan: SmartPension (5% employee &amp; 3% employer contributions)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, JAX, Rust, Go, Java, Ray, Kubernetes, Training large transformer models in a distributed fashion, Navigating the full MLOps stack, Strong publication record in a relevant scientific domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into our daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/675b7f06-a76b-4144-af0c-4dd3282ef489</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8c72b249-b04</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We are seeking an AI Scientist to join our research team in Warsaw. As an AI Scientist, you will research and develop novel methods to push the frontier of large language models, work across use cases and modalities, and build tooling and infrastructure to allow training, evaluation, and analysis of AI models at scale.</p>
<p>Responsibilities</p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g., reasoning, code, agents) and modalities (e.g., text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation, and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers, and product teams to ship AI systems that have a real-world impact</li>
</ul>
<p>About You</p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g., Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g., PyTorch, JAX) or distributed systems (e.g., Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous, and a team player</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Monthly meal allowance</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Private pension plan</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, JAX, Rust, Go, Java, Ray, Kubernetes, Training large transformer models in a distributed fashion, Navigating the full MLOps stack, for instance, fine-tuning, evaluation, and deployment, Strong publication record in a relevant scientific domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops and provides AI technology, including high-performance, open-source models, products, and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/4e498cbf-151e-483a-b3f7-76ff64a22041</Applyto>
      <Location>Warsaw</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>75be3667-194</externalid>
      <Title>AI Scientist - Audio</Title>
      <Description><![CDATA[<p>About Mistral</p>
<p>At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone, developed in the open, and built by engineers all over the world.</p>
<p>We develop models for the enterprise and for consumers, focusing on delivering systems that can really change the way businesses operate and that can integrate into our daily lives, all while releasing frontier models open source for everyone to try and benefit from.</p>
<p>What will you do?</p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g., reasoning, code, agents) and modalities (e.g., text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p>About you</p>
<ul>
<li>An expert in speech input/output methodologies (specific to audio)</li>
<li>Highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>Hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>High engineering competence. This means being able to design complex software and make it usable in production</li>
<li>Self-starter, autonomous and a team player</li>
</ul>
<p>Now, it would be ideal if</p>
<ul>
<li>You have experience working with large-scale speech-language models</li>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You can navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
</ul>
<p>Benefits</p>
<p>France</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
</ul>
<p>UK</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Insurance</li>
<li>Transportation: Reimburse office parking charges, or 90GBP/month for public transport</li>
<li>Sport: 90GBP/month reimbursement for gym membership</li>
<li>Meal voucher: £200 monthly allowance for meals</li>
<li>Pension plan: SmartPension (percentages are 5% Employee &amp; 3% Employer)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>speech input/output methodologies, Python, PyTorch, JAX, Rust, Go, Java, distributed systems, Ray, Kubernetes, large-scale speech-language models, training large transformer models in a distributed fashion, MLOps stack, fine-tuning, evaluation, deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into our daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/94173e13-3050-4044-862a-e8dfc2deda5e</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>fe6b2de2-36b</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>About Mistral</p>
<p>At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone, developed in the open, and built by engineers all over the world.</p>
<p>We develop models for the enterprise and for consumers, focusing on delivering systems that can really change the way businesses operate and that can integrate into our daily lives, all while releasing frontier models open source for everyone to try and benefit from.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g., reasoning, code, agents) and modalities (e.g., text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous and a team player</li>
</ul>
<p>Now, it would be ideal if</p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
<li>Audio/speech experience: audio input/output, NLP</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and bonus structure</li>
<li>Generous Equity</li>
<li>Health: Competitive Healthcare program (Medical Provider: Blueshield of California 100% coverage for employee, 75% for dependents)</li>
<li>Pension: 401K (6% matching)</li>
<li>PTO: 18 days</li>
<li>Transportation: Reimburse office parking charges, or $120/month for public transport</li>
<li>Coaching: we offer Betterup coaching on a voluntary basis</li>
<li>Sport: $120/month reimbursement for gym membership</li>
<li>Meal stipend: $400 monthly allowance for meals (solution might evolve as we grow bigger)</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>Python, PyTorch, JAX, Rust, Go, Java, Ray, Kubernetes, large language models, distributed systems, MLOps, training large transformer models, fine-tuning, evaluation and deployment, audio input/output, NLP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into our daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/7b20d2c8-d5a7-4efd-a13e-05d920ec5985</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>cab4499b-7c8</externalid>
      <Title>Senior Software Engineer, Scientific Computing</Title>
      <Description><![CDATA[<p>At KoBold we believe that a modern scientific computing stack will enable systematic mineral exploration and materially improve our rate of mineral discovery. This role is a key ingredient to this strategy. As a member of our scientific computing team, you will apply software engineering and machine learning to remote-sensing, drillhole, imaging, geophysics and other mineral exploration data in order to build scalable ML systems to help make high-speed, high-quality decisions for our mineral exploration projects. Collaborating with our exceptional team of data scientists and geologists, you will tackle complex scientific problems head-on and collectively pave the way for discoveries of vital energy transition metals like lithium, copper, nickel, and cobalt. Together we can shape the future of mineral exploration and contribute to building a sustainable world.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect, implement, and maintain foundational scientific computing libraries that will be used in KoBold’s mineral exploration analyses.</li>
<li>Build tooling to increase the velocity of our machine learning progress, including enabling rapid prototyping in Jupyter notebooks; build experimentation, evaluation, and simulation frameworks; turning successful R&amp;D into robust, scalable ML pipelines; and organizing models and their outputs for repeatability and discoverability.</li>
<li>In collaboration with data scientists, build models to make statistically valid predictions about the locations of economic concentrations of ore metals within the Earth’s crust.</li>
<li>Apply, and coach team members to use, engineering best practices such as writing robust, testable, and composable code</li>
<li>Collaborate with data scientists, geoscientists and engineers to invent the modern scientific computing stack for mineral exploration</li>
<li>Occasional travel to exploration sites around the world to observe the impact of scientific computing on KoBold’s exploration products and design new technologies to further discovery. Travel is approximately twice per year depending on project needs.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>At least 5 years of experience as a software engineer, data scientist or ML engineer, though most great candidates will have closer to 10.</li>
<li>Track record of building production quality data processing solutions or tooling that have delivered business value</li>
<li>Proficiency with foundational concepts of ML, including statistical, traditional and deep-learning approaches</li>
<li>Proficiency in Python, ideally including array-based packages such as xarray and numpy</li>
<li>Deep experience with measured scientific data</li>
<li>Experience in visualizing scientific data for domain experts</li>
<li>Experience in MLops and in the making of robust ML systems</li>
<li>Drive to increase the velocity and effectiveness of our data scientists in both experimental and production workflows</li>
<li>Capacity to dive deep on novel challenging problems in applying ML to mineral exploration, including understanding a complex domain of geology and mineral exploration practices as well as working with limited, disparate and noisy data sources</li>
<li>Collaborative attitude to work with stakeholders with different backgrounds (data scientists, geoscientists, software engineers, operations)</li>
</ul>
<p>Work practices and motivation:</p>
<ul>
<li>Ability to take ownership and responsibility of large projects.</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain. Open to working directly with geologists in the field. Enjoys constantly learning such that you are driving insights and innovations.</li>
<li>Ability to explain technical problems to and collaborate on solutions with domain experts who aren’t software developers. A strong communicator who enjoys working with colleagues across the company.</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on a range of responsibilities.</li>
<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company.</li>
<li>Ability to independently prioritize multiple tasks effectively.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $215,000</Salaryrange>
      <Skills>Python, Machine Learning, Scientific Computing, Data Science, Geophysics, Remote Sensing, Drillhole Imaging, Jupyter Notebooks, MLops, Robust ML Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold Metals</Employername>
      <Employerlogo>https://logos.yubhub.co/koboldmetals.com.png</Employerlogo>
      <Employerdescription>KoBold Metals is a privately held mineral exploration company that uses AI models and novel sensors to guide exploration decisions. It has become the largest independent mineral exploration company and the largest exploration technology developer.</Employerdescription>
      <Employerwebsite>https://www.koboldmetals.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4624038005</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2e8a2997-260</externalid>
      <Title>Senior Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We are open to hiring at multiple levels for this role, depending on experience, impact, and demonstrated ownership. While this role is level-agnostic, it is best suited for engineers with experience owning and working in highly ambiguous problem spaces.</p>
<p>About the company:
The mining industry has steadily become worse at finding new ore deposits, requiring &gt;10X more capital to make discoveries compared to 30 years ago. KoBold Metals builds AI models for mineral exploration and deploys those models, alongside our novel sensors, to guide decisions on KoBold-owned-and-operated exploration programs.</p>
<p>About The Role:
In this role, you will partner with exploration and engineering teams to build reliable, scalable infrastructure that makes it easier to turn data and models into real-world exploration insights. You will improve observability, streamline MLOps workflows, and maintain shared tools like JupyterHub that enable faster experimentation and collaboration. Your work will help create a solid foundation for scientists and engineers to focus on discovery instead of infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and operate compute infrastructure that is both scalable and reliable to support critical services.</li>
<li>Work closely with engineering teams to embed observability, reliability, and security throughout the software development process.</li>
<li>Create and maintain automation for monitoring, deployments, and incident response to keep operations efficient and predictable.</li>
<li>Lead or support capacity planning, performance reviews, and system tuning to ensure stable and efficient systems.</li>
<li>Join the on-call rotation and take part in incident response, troubleshooting, and resolution.</li>
<li>Develop and refine monitoring and alerting to catch issues early and reduce downtime.</li>
<li>Establish and maintain disaster recovery and business continuity practices that protect the organization against failures.</li>
<li>Regularly review and improve our tools and processes to strengthen system visibility and reliability.</li>
<li>Investigate points of fragility in distributed systems and understand how complex systems behave under stress in order to improve resilience.</li>
<li>Continually learn about mineral exploration through reading, discussions with exploration team members, periodic rotation on an exploration team and time in the field with geologists</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of experience as an Infrastructure Engineer, Site Reliability Engineer or in a similar role</li>
<li>Strong scripting and programming skills (Python, Go, Java, or JavaScript/Node.js)</li>
<li>Experience with IaC tools like Terraform and container orchestration tools like Kubernetes and Docker</li>
<li>Experience with cloud platforms such as AWS</li>
<li>Experience operating or administering JupyterHub in a multi-user environment</li>
<li>Understanding of MLOps workflows, including model training, deployment, and related tooling</li>
<li>Excellent communication &amp; collaboration skills and a continuous improvement mindset</li>
<li>Proven ability to troubleshoot complex issues and implement effective solutions</li>
<li>Proven ability to thrive in dynamic and evolving environments, effectively navigating uncertainty and incomplete information.</li>
<li>Proven ability to grow expertise, influence &amp; educate others</li>
<li>Comfortable making informed decisions with limited data, adapting quickly to new circumstances, and maintaining focus on strategic objectives while driving clarity for the team.</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain. Enjoys constantly learning such that you are driving insights through using our tools in exploration and willing to work directly with geologists in the field.</li>
<li>Ability to explain technical problems to and collaborate on solutions with domain experts who are not infrastructure engineers. A strong communicator who enjoys working with colleagues across the company.</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on an evolving range of responsibilities.</li>
<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $230,000</Salaryrange>
      <Skills>scripting, programming, IaC, container orchestration, cloud platforms, MLOps workflows, observability, reliability, security, automation, monitoring, deployments, incident response, capacity planning, performance reviews, system tuning, disaster recovery, business continuity, tools, processes, distributed systems, complex systems, resilience, mineral exploration, geology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold Metals</Employername>
      <Employerlogo>https://logos.yubhub.co/koboldmetals.com.png</Employerlogo>
      <Employerdescription>KoBold Metals is a privately held mineral exploration company and technology developer, with a portfolio of over 60 projects.</Employerdescription>
      <Employerwebsite>https://koboldmetals.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4002126005</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2bc207d0-89b</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Machine Learning Research Engineer to join the Machine Learning Science (MLS) team, within the Computational Science department. The ideal candidate has a strong knowledge in designing and building deep learning (DL) pipelines, and expertise in creating reliable, scalable artificial intelligence/machine learning (AI/ML) systems in a cloud environment.</p>
<p>The MLS team at Freenome develops DL models using massive-scale genomic data that presents significant challenges for current training paradigms. The Senior Machine Learning Research Engineer will primarily be responsible for developing and deploying the infrastructure needed to support development of such DL models: enabling distributed DL pipelines, optimising hardware utilisation for efficient training, and performing model optimisations.</p>
<p>As part of an interdisciplinary R&amp;D team, they will work in close collaboration with machine learning scientists, computational biologists and software engineers to accelerate the development of state-of-the-art ML/AI models and help Freenome achieve its mission.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Implementing and refining DL pipelines on distributed computing platforms to enhance the speed and efficiency of DL operations, including model training, data handling, model management, and inference.</li>
<li>Collaborating closely with ML scientists and software engineers to understand current challenges and requirements and ensure that the DL model development pipelines created are perfectly aligned with scientific goals and operational needs.</li>
<li>Continuously monitoring, evaluating, and optimising DL model training pipelines for performance and scalability.</li>
<li>Staying up to date with the latest advancements in AI, ML, and related technologies, and quickly learning and adapting new tools and frameworks, if necessary.</li>
<li>Developing and maintaining robust, reproducible DL pipelines that can be reliably executed, maintaining consistency and accuracy of results.</li>
<li>Driving performance improvements across our stack through profiling, optimisation, and benchmarking. Implementing efficient caching solutions and debugging distributed systems to accelerate both training and evaluation pipelines.</li>
<li>Acting as a bridge facilitating communication between the engineering and scientific teams, documenting and sharing best practices to foster a culture of learning and continuous improvement.</li>
</ul>
<p>Must-haves include:</p>
<ul>
<li>MS or equivalent experience in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, or Software Engineering, with an emphasis on AI/ML theory and/or practical development.</li>
<li>5+ years of post-MS industry experience working on developing AI/ML software engineering pipelines.</li>
<li>Proficiency in a general-purpose programming language: Python (preferred), Java, Julia, C, C++, etc.</li>
<li>Strong knowledge of ML and DL fundamentals and hands-on experience with machine learning frameworks such as PyTorch, TensorFlow, Jax or Scikit-learn.</li>
<li>In-depth knowledge of scalable and distributed computing platforms that support complex model training (such as Ray or DeepSpeed) and their integration with ML developer tools like TensorBoard, Wandb, or MLflow.</li>
<li>Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and how to deploy and manage AI/ML models and pipelines in a cloud environment.</li>
<li>Understanding of containerisation technologies (e.g., Docker) and computing resource orchestration tools (e.g., Kubernetes) for deploying scalable ML/AI solutions.</li>
<li>Proven track record of developing and optimising workflows for training DL models, large language models (LLMs), or similar for problems with high data complexity and volume.</li>
<li>Experience managing large datasets, including data storage (such as HDFS or Parquet on S3), retrieval, and efficient data processing techniques (via libraries and executors such as PyArrow and Spark).</li>
<li>Proficiency in version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) practices to maintain code quality and automate development workflows.</li>
<li>Expertise in building and launching large-scale ML frameworks in a scientific environment that supports the needs of a research team.</li>
<li>Excellent ability to work effectively with cross-functional teams and communicate across disciplines.</li>
</ul>
<p>Nice-to-haves include:</p>
<ul>
<li>Experience working with large-scale genomics or biological datasets.</li>
<li>Experience managing multimodal datasets, such as combinations of sequence, text, image, and other data.</li>
<li>Experience with GPU/accelerator programming and kernel development (such as CUDA, Triton, or XLA).</li>
<li>Experience with infrastructure-as-code and configuration management.</li>
<li>Experience cultivating MLOps and ML infrastructure best practices, especially around reliability, provisioning and monitoring.</li>
<li>Strong track record of contributions to relevant DL projects, e.g., on GitHub.</li>
</ul>
<p>The US target range of our base salary for new hires is $161,925 - $227,325. You will also be eligible to receive equity, cash bonuses, and a full range of medical, financial, and other benefits depending on the position offered.</p>
<p>Freenome is proud to be an equal-opportunity employer, and we value diversity. Freenome does not discriminate on the basis of race, colour, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$161,925 - $227,325</Salaryrange>
      <Skills>Python, Java, Julia, C, C++, PyTorch, TensorFlow, Jax, Scikit-learn, Ray, DeepSpeed, TensorBoard, Wandb, MLflow, AWS, Google Cloud, Azure, Docker, Kubernetes, Git, Continuous Integration/Continuous Deployment, Large-scale genomics or biological datasets, Multimodal datasets, GPU/Accelerator programming and kernel development, Infrastructure-as-code and configuration management, MLOps and ML infrastructure best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Freenome</Employername>
      <Employerlogo>https://logos.yubhub.co/freenome.com.png</Employerlogo>
      <Employerdescription>Freenome is a quantitative biology company that aims to reduce cancer mortality via accessible early detection.</Employerdescription>
      <Employerwebsite>https://freenome.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/freenome/jobs/8013673002</Applyto>
      <Location>Brisbane, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e8d98a5b-1ea</externalid>
      <Title>AI &amp; ML Engineer</Title>
      <Description><![CDATA[<p>About Charlotte Tilbury Beauty</p>
<p>Founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, Charlotte Tilbury Beauty has revolutionised the face of the global beauty industry.</p>
<p>The AI &amp; ML Engineering team accelerates the adoption of AI across the business, championing innovation while ensuring our machine learning products are robust, scalable, and cost-efficient.</p>
<p>Responsibilities:</p>
<p>Partner with stakeholders to scope problems and identify the right solution, whether leveraging existing AI tools or building custom workflows &amp; solutions.</p>
<p>Design and implement agentic systems using techniques spanning RAG, grounding, prompt engineering, and orchestration on a GCP-first stack.</p>
<p>Build and maintain production ML pipelines and services for non-GenAI use cases (e.g. recommender systems, customer segmentation models, marketing optimisation modules, leveraging supervised, unsupervised and/or econometric modelling approaches).</p>
<p>Develop APIs and microservices for AI/ML solutions, ensuring security, scalability, and observability.</p>
<p>Implement CI/CD for ML services, writing infrastructure as code, and monitoring for model/data drift and performance.</p>
<p>Establish robust guardrails for safe AI usage, including prompt security, practical evaluation frameworks, and compliance with privacy regulations.</p>
<p>Drive and evangelize best practices, reusable templates, and documentation to scale AI/ML delivery across the business.</p>
<p>Collaborate with data engineers, data scientists, front &amp; back-end engineers, product managers, legal &amp; infosec colleagues to deliver impactful solutions end-to-end.</p>
<p>Who you will work with</p>
<p>The AI &amp; ML Engineer Lead and the wider data team.</p>
<p>About you</p>
<p>The role requires a blend of technical depth and product sense, including:</p>
<p>Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.</p>
<p>Strong Python engineering skills (FastAPI, testing, typing) and experience with cloud-native development (GCP preferred).</p>
<p>Hands-on experience with GCP Vertex AI (model endpoints, pipelines, embeddings, vector search) or equivalent cloud-native ML platforms (e.g. AWS SageMaker, Azure ML) and agent orchestration frameworks such as LangChain and LangGraph.</p>
<p>Solid understanding of MLOps - CI/CD, IaC (Terraform), experiment tracking, model registry, and monitoring.</p>
<p>Proven experience deploying and operating ML systems in production (batch and real-time).</p>
<p>Familiarity with RAG architectures, prompt engineering, and evaluation techniques.</p>
<p>Strong grasp of security, privacy, and governance principles (IAM, secrets, PII handling).</p>
<p>Excellent communication skills and ability to work with non-technical stakeholders.</p>
<p>In addition to the above, we would LOVE if you have:</p>
<p>Experience with vector databases and retrieval strategies.</p>
<p>Knowledge of recommender systems and ranking models.</p>
<p>Familiarity with LLM evaluation tools (e.g., RAGAS, TruLens, LangSmith, Arize).</p>
<p>Exposure to feature stores, data lineage, and observability stacks.</p>
<p>Experience in e-commerce or retail environments.</p>
<p>Demonstrable ability to weigh up buy/build/configure decisions in the LLM space.</p>
<p>Why join us?</p>
<p>Be a part of this values-driven, high-growth, magical journey with an ultimate vision to empower everyone, everywhere to be the best version of themselves.</p>
<p>We’re a hybrid model with flexibility, allowing you to work how best suits you.</p>
<p>25 days holiday (plus bank holidays) with an additional day to celebrate your birthday.</p>
<p>Inclusive parental leave policy that supports all parents and carers throughout their parenting and caring journey.</p>
<p>Financial security and planning with our pension and life assurance for all.</p>
<p>Wellness and social benefits including Medicash, Employee Assist Programs and regular social connects with colleagues.</p>
<p>Bring your furry friend to work with you on our allocated dog friendly days and spaces.</p>
<p>And not to forget our generous product discount and gifting!</p>
<p>At Charlotte Tilbury Beauty, our mission is to empower everybody in the world to be the most beautiful version of themselves.</p>
<p>We celebrate and support this by encouraging and hiring people with diverse backgrounds, cultures, voices, beliefs, and perspectives into our growing global workforce.</p>
<p>By doing so, we better serve our communities, customers, employees - and the candidates that take part in our recruitment process.</p>
<p>If you want to learn more about life at Charlotte Tilbury Beauty please follow our LinkedIn page!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, GCP, Vertex AI, LangChain, LangGraph, MLOps, CI/CD, IaC, Experiment tracking, Model registry, Monitoring, Vector databases, Recommender systems, Ranking models, LLM evaluation tools, Feature stores, Data lineage, Observability stacks, E-commerce, Retail environments</Skills>
      <Category>Engineering</Category>
      <Industry>Beauty</Industry>
      <Employername>Charlotte Tilbury Beauty</Employername>
      <Employerlogo>https://logos.yubhub.co/charlottetilbury.com.png</Employerlogo>
      <Employerdescription>A global beauty company founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, with over 2,300 employees globally.</Employerdescription>
      <Employerwebsite>https://www.charlottetilbury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/243770B17B</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>d4dabbbc-b6f</externalid>
      <Title>Principal Data Scientist</Title>
<Description><![CDATA[<p>Are you ready to join a world-class team and make a significant impact on the gaming industry? At Aristocrat, we aim to bring happiness to life through the power of play. We seek a Principal Data Scientist to help us reach our ambitious goals. You will have a vital role in enhancing gameplay, boosting player engagement, and improving business outcomes with your advanced data expertise. This opportunity allows you to work on innovative projects, collaborate with diverse teams, and guide critical initiatives that will shape the future of our leading games.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-impact data science initiatives end-to-end, including problem framing, methodology selection, experiment development, implementation partnership, and impact measurement.</li>
<li>Build and deliver machine learning and reinforcement learning solutions to improve player engagement, retention, monetization, and operational outcomes.</li>
<li>Lead the modeling framework for complex systems, guaranteeing comprehensive evaluation and monitoring of causal inference, uplift modeling, sequential decisioning, bandits/reinforcement learning, and forecasting.</li>
<li>Partner with game teams to define success metrics, guardrails, and decision frameworks, translating analytical results into actionable product and operational actions.</li>
<li>Define and uphold engineering standards and guidelines for model development, including validation, uncertainty, reproducibility, and bias/quality checks.</li>
<li>Drive scalable experimentation with A/B and Multi-armed bandit testing frameworks, power analysis, variance reduction, and online-offline alignment.</li>
<li>Work together with Data Engineering, MLOps, and Game Tech teams to guarantee dependable data foundations, feature accessibility, and model deployment pathways.</li>
<li>Build internal data products to improve the speed and quality of decision-making, such as AB-test calculators, decision tools, and automated insights.</li>
<li>Provide technical leadership through building and code reviews, mentoring, and coaching, improving the standard of data science craft across the organization.</li>
<li>Serve as a reliable collaborator throughout the organization, promoting data-informed decision-making and enabling business units to embrace data products.</li>
<li>Translate complex analytical insights into actionable recommendations, presenting them to senior leadership to inform critical business decisions and encourage collaborators.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PhD or MSc in Data Science, Computer Science, Statistics, Physics, Mathematics, or a related quantitative field, 5+ years of professional data science experience, Demonstrated proficiency in clustering, predictive modeling, reinforcement learning, and Bayesian statistics, Hands-on experience in software engineering, MLOps, and deploying machine learning models at scale, Proficiency in SQL, Python, and familiarity with big data technologies (e.g., Kafka, Spark) and/or cloud platforms (e.g., GCP, AWS, or Azure), Industry knowledge: Experience in gaming or digital entertainment is a strong plus</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Aristocrat</Employername>
      <Employerlogo>https://logos.yubhub.co/aristocrat.com.png</Employerlogo>
      <Employerdescription>Aristocrat is a global gaming company with a portfolio of regulated land-based gaming, social casino, and regulated online real money gaming products.</Employerdescription>
      <Employerwebsite>https://www.aristocrat.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://aristocrat.wd3.myworkdayjobs.com/en-US/AristocratExternalCareersSite/job/London-United-Kingdom/Principal-Data-Scientist_R0020855</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>1338e7d1-ad8</externalid>
      <Title>Cloud Machine Learning Engineer</Title>
<Description><![CDATA[<p>At Hugging Face, we&#39;re on a journey to democratize good AI. We are building the fastest-growing platform for AI builders. We are looking for a Cloud Machine Learning Engineer to help build machine learning solutions used by millions, leveraging cloud technologies.</p>
<p>You will work on integrating Hugging Face&#39;s open-source libraries like Transformers and Diffusers, with major cloud platforms or managed SaaS solutions. This role involves bridging and integrating models with different cloud providers, ensuring the models meet expected performance, designing and developing easy-to-use, secure, and robust developer experiences and APIs for our users, writing technical documentation, examples and notebooks to demonstrate new features, and sharing and advocating your work and the results with the community.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>Deep experience building with Hugging Face technologies, including Transformers, Diffusers, Accelerate, PEFT, and Datasets</li>
<li>Expertise in a deep learning framework, preferably PyTorch; XLA understanding is optional</li>
<li>Strong knowledge of cloud platforms like AWS and services such as Amazon SageMaker, EC2, S3, and CloudWatch, and/or their Azure and GCP equivalents</li>
<li>Experience building MLOps pipelines for containerizing models and solutions with Docker</li>
<li>Familiarity with TypeScript, Rust, MongoDB, and Kubernetes is helpful</li>
<li>Ability to write clear documentation, examples, and definitions, and to work across the full product development lifecycle</li>
<li>Bonus: experience with Svelte &amp; TailwindCSS</li>
</ul>
<p>We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where people feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets, Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding, Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents, Experience in building MLOps pipelines for containerizing models and solutions with Docker, Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful, Bonus experience with Svelte &amp; TailwindCSS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hugging Face</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Hugging Face is a platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps.</Employerdescription>
      <Employerwebsite>https://huggingface.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/A3879724CD</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>af4253f8-57e</externalid>
      <Title>Cloud Machine Learning Engineer - EMEA remote</Title>
      <Description><![CDATA[<p>At Hugging Face, we&#39;re on a journey to democratize good AI. We are building the fastest growing platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps. Our open-source libraries have more than 600k+ stars on Github. Hugging Face has become the most popular, community-driven project for training, sharing, and deploying the most advanced machine learning models.</p>
<p>We are looking for a Cloud Machine Learning Engineer to help build machine learning solutions used by millions, leveraging cloud technologies. You will work on integrating Hugging Face&#39;s open-source libraries like Transformers and Diffusers with major cloud platforms or managed SaaS solutions.</p>
<p>Responsibilities:</p>
<ul>
<li>Bridging and integrating 🤗 transformers/diffusers models with different cloud providers.</li>
<li>Ensuring the above models meet the expected performance.</li>
<li>Designing and developing easy-to-use, secure, and robust developer experiences and APIs for our users.</li>
<li>Writing technical documentation, examples, and notebooks to demonstrate new features.</li>
<li>Sharing and advocating your work and the results with the community.</li>
</ul>
<p>About You</p>
<p>You&#39;ll enjoy working on this team if you have experience with, and interest in, deploying machine learning systems to production and building great developer experiences. The ideal candidate will have skills including:</p>
<ul>
<li>Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets</li>
<li>Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding</li>
<li>Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents.</li>
<li>Experience in building MLOps pipelines for containerizing models and solutions with Docker</li>
<li>Familiarity with TypeScript, Rust, MongoDB, and Kubernetes is helpful</li>
<li>Ability to write clear documentation, examples, and definitions, and to work across the full product development lifecycle</li>
<li>Bonus: Experience with Svelte &amp; TailwindCSS</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets, Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding, Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents., Experience in building MLOps pipelines for containerizing models and solutions with Docker, Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful, Svelte &amp; TailwindCSS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hugging Face</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Hugging Face is a platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps.</Employerdescription>
      <Employerwebsite>https://huggingface.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/0CE9E806CC</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>66a96d8c-777</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone, developed in the open, and built by engineers all over the world.</p>
<p>We are hiring experts in the training of large language models and distributed systems. Join us to be part of a pioneering company shaping the future of AI.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g. reasoning, code, agents) and modalities (e.g. text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p><strong>About You:</strong></p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous and a team player</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
<li>Audio/speech experience: audio input/output, NLP</li>
</ul>
<p><strong>What We Offer:</strong></p>
<ul>
<li>Competitive salary and bonus structure</li>
<li>Generous Equity</li>
<li>Health: Competitive Healthcare program (Medical Provider: Blueshield of California 100% coverage for employee, 75% for dependents)</li>
<li>Pension: 401K (6% matching)</li>
<li>PTO: 18 days</li>
<li>Transportation: Reimburse office parking charges, or $120/month for public transport</li>
<li>Coaching: we offer Betterup coaching on a voluntary basis</li>
<li>Sport: $120/month reimbursement for gym membership</li>
<li>Meal stipend: $400 monthly allowance for meals (solution might evolve as we grow bigger)</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, JAX, Rust, Go, Java, distributed systems, Ray, Kubernetes, large language models, transformer models, audio input/out, NLP, MLOps, fine-tuning, evaluation, deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into our daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/7b20d2c8-d5a7-4efd-a13e-05d920ec5985</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>89406e8e-f38</externalid>
      <Title>Machine Learning Engineer, Open-Source Software</Title>
      <Description><![CDATA[<p>You will be in charge of open-sourcing state-of-the-art models, whilst maintaining and improving Mistral’s publicly available libraries. Your work is critical in helping turn research breakthroughs into tangible solutions and improve Mistral&#39;s open-source ecosystem.</p>
<p>About the Open Source Software team</p>
<p>Our OSS team is embedded in our Science team and works very closely with various engineering and marketing teams. All OSS team members can fluidly move along the production/research spectrum depending on where the needs are or where their interests lie.</p>
<p>Responsibilities</p>
<ul>
<li>Release our models to open-source platforms and libraries, e.g., vLLM, GitHub, Hugging Face</li>
<li>Maintain Mistral’s open-source libraries (mistral-common, mistral-finetune, mistral-inference)</li>
<li>Create and maintain tooling and services, both internal-facing (internal research) and external-facing (open-source libraries)</li>
<li>Implement and optimize open-source and internal libraries for performance and accuracy, ensuring production readiness and employing cutting-edge technology and innovative approaches</li>
<li>Collaborate with the open-source community (PyTorch, vLLM, Hugging Face)</li>
</ul>
<p>About you</p>
<ul>
<li>Master’s degree in Computer Science, Machine Learning, Data Science, or a related field</li>
<li>Experience contributing to popular open-source libraries such as PyTorch, TensorFlow, JAX, vLLM, Transformers, Llama.cpp, ...</li>
<li>Passion for contributing to the open-source software ecosystem</li>
<li>Expert programming skills in Python, PyTorch, and MLOps</li>
<li>Adaptable, proactive, and autonomous</li>
<li>Attention to detail and a drive to go the last mile to build almost-perfect tools</li>
<li>Deep understanding of machine learning approaches and algorithms, especially LLMs</li>
<li>Low ego, collaborative, and a real team-player mindset</li>
</ul>
<p>Now, it would be ideal if you have:</p>
<ul>
<li>Experience with training and fine-tuning large language models (e.g., distillation, supervised fine-tuning, policy optimization)</li>
<li>Experience working with Slurm</li>
<li>Experience working with research teams</li>
<li>Experience as a core maintainer of a popular ML open-source library</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, MLOps, Machine Learning, Large Language Models, Slurm, Open-source libraries, vLLM, GitHub, Hugging Face, PyTorch, Tensorflow, JAX, Transformers, Llama.cpp</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
<Employerdescription>Mistral AI develops high-performance, optimized, open-source and cutting-edge AI models, products and solutions for enterprise use, whether on-premises or in cloud environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ef4c26fc-3fdb-4dd2-a64e-95264ee769dd</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>cd22be7a-22a</externalid>
      <Title>Applied AI, Technical Lead - Forward Deployed AI Engineer</Title>
      <Description><![CDATA[<p>About the Job:</p>
<p>Mistral AI is seeking a Technical Lead, Applied AI to drive the technical strategy, execution, and delivery of complex AI solutions for our enterprise customers.</p>
<p>In this role, you will lead a project team of Applied AI Engineers, ensuring the successful deployment of Mistral AI products and the development of high-impact, scalable AI use cases.</p>
<p>You will act as the primary technical point of contact for our most strategic customers, guiding them through the entire lifecycle—from pre-sales to post-implementation—while collaborating closely with research, product, and engineering teams to shape the future of our offerings.</p>
<p>As a Technical Lead, you will bridge the gap between cutting-edge AI research and real-world enterprise applications, ensuring our solutions are robust, scalable, and aligned with both customer needs and Mistral’s technological vision.</p>
<p>What you will do:</p>
<ul>
<li><p>Deliver the critical lines of code of our complex projects as an individual contributor; you’ll be hands-on, de-risking the most critical parts.</p>
</li>
<li><p>Lead technical teams of Applied AI Engineers, providing mentorship, technical guidance, and best practices for deploying state-of-the-art GenAI applications across industries.</p>
</li>
<li><p>Lead technical discussions during pre-sales, translating customer requirements into actionable solutions and communicating Mistral’s technological advantages to diverse stakeholders.</p>
</li>
<li><p>Design and oversee the implementation of complex AI systems, including fine-tuning, RAG, agentic workflows, and custom LLM applications, ensuring alignment with Mistral’s product roadmap and open-source initiatives.</p>
</li>
<li><p>Drive innovation by identifying emerging trends in AI, evaluating new tools and methodologies, and championing best practices for fine-tuning, inference, and deployment.</p>
</li>
<li><p>Work closely with product managers, researchers, and engineers to ensure seamless integration of customer feedback into Mistral’s product development cycle.</p>
</li>
</ul>
<p>About you:</p>
<ul>
<li><p>You hold a PhD or Master’s degree in AI, Machine Learning, Computer Science, or a related field.</p>
</li>
<li><p>You have 7-8+ years of experience in AI/ML, with at least 2+ years in a technical leadership role (e.g., Tech Lead, Engineering Manager, or Solutions Architect) focused on AI products or enterprise solutions.</p>
</li>
<li><p>You have a proven track record of leading teams to deliver complex AI projects, from prototyping to production, in industries such as tech, finance, healthcare, or industrial automation.</p>
</li>
<li><p>You possess deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</p>
</li>
<li><p>You are proficient in Python, PyTorch, and modern AI frameworks (e.g., LangChain, Hugging Face).</p>
</li>
<li><p>Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools is a plus.</p>
</li>
<li><p>You have strong software engineering skills, including API design, backend/full-stack development, and system architecture.</p>
</li>
<li><p>You excel in technical communication, with the ability to articulate complex concepts to both technical and non-technical audiences, including executives and engineers.</p>
</li>
<li><p>You thrive in fast-paced, collaborative environments and are passionate about mentoring and growing technical talent.</p>
</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li><p>Contributed to open-source projects, particularly in the LLM or AI space.</p>
</li>
<li><p>Experience in customer-facing roles (e.g., Solutions Architect, Customer Engineer, or Technical Product Manager) with a focus on enterprise AI adoption.</p>
</li>
<li><p>A track record of driving technical strategy and influencing product direction based on customer needs and market opportunities.</p>
</li>
</ul>
<p>Why join us?</p>
<p>You’ll have the opportunity to shape the future of AI adoption in enterprises, work with a world-class team, and contribute to open-source projects that impact millions.</p>
<p>If you’re excited about leading technical innovation and solving real-world challenges with AI, we’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, LangChain, Hugging Face, Cloud platforms (AWS, GCP, Azure), MLOps tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions for enterprise and personal use.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ebfdc0da-13fd-4ae9-9861-bedb5ff493ea</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>6ef094ae-1e3</externalid>
      <Title>AI Scientist - Paris/London - Onsite or Hybrid or Remote</Title>
      <Description><![CDATA[<p>At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone, developed in the open, and built by engineers all over the world.</p>
<p>We are hiring experts in the training of large language models and distributed systems. Join us to be part of a pioneering company shaping the future of AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g. reasoning, code, agents) and modalities (e.g. text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p>About you:</p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous and a team player</li>
</ul>
<p>Now, it would be ideal if:</p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
</ul>
<p>Benefits:</p>
<p>France:</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
</ul>
<p>UK:</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Insurance</li>
<li>Transportation: Reimbursement of office parking charges, or £90/month for public transport</li>
<li>Sport: £90/month reimbursement for gym membership</li>
<li>Meal voucher: £200 monthly allowance for meals</li>
<li>Pension plan: SmartPension (5% employee &amp; 3% employer contributions)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, JAX, Rust, Go, Java, Ray, Kubernetes, training large transformer models in a distributed fashion, MLOps stack, fine-tuning, evaluation and deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into our daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/675b7f06-a76b-4144-af0c-4dd3282ef489</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>34dcb379-23a</externalid>
      <Title>Applied AI, Forward Deployed Machine Learning Engineer - (Internship)</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. Our comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.</p>
<p>Role Summary</p>
<p>As an Applied Engineering Intern, you will work closely with our Applied AI Engineering team to facilitate the adoption of Mistral AI products among customers and collaborate with them to address complex technical challenges. This role is based in Paris, with an internship duration of 3 to 6 months. We are open to CIFRE programs as a continuation after the internship.</p>
<p>Responsibilities</p>
<p>• Contribute to the deployment of state-of-the-art GenAI applications, driving technological transformation with our customers.</p>
<p>• Collaborate with researchers, AI engineers, and product engineers on complex customer projects.</p>
<p>• Work with the product and science team to continuously improve our product and model capabilities based on customer feedback.</p>
<p>How We Work in Applied AI</p>
<p>• We care about people and outputs.</p>
<p>• What matters is what you ship, not the time you spend on it.</p>
<p>• Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.</p>
<p>• Always ask why. The best solutions come from deep understanding, not from copying what worked before.</p>
<p>• We say what we mean. Feedback is direct, timely, and given because we care.</p>
<p>• No politics. Low ego, high standards.</p>
<p>• We embrace an unstructured environment and find joy in it.</p>
<p>About You</p>
<p>• You are currently pursuing a degree in AI, data science, or a related field from a tier 1 engineering school or university.</p>
<p>• You have strong programming skills in Python.</p>
<p>• You are familiar with machine learning algorithms and natural language processing techniques.</p>
<p>• You have a basic understanding of MLOps and deploying machine learning use cases.</p>
<p>• You have good communication skills with the ability to explain technical concepts to both technical and non-technical audiences.</p>
<p>Ideally You Have:</p>
<p>• Experience with deep learning frameworks such as PyTorch.</p>
<p>• Familiarity with version control systems (e.g., Git) and Linux shell environment.</p>
<p>• Experience working in HPC Environments.</p>
<p>• Publication record in AI or a related field.</p>
<p>Benefits</p>
<p>• Competitive salary</p>
<p>• Food: Daily lunch vouchers</p>
<p>• Sport: Monthly contribution to a Gympass subscription</p>
<p>• Transportation: Monthly contribution to a mobility pass</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Machine learning algorithms, Natural language processing techniques, MLOps, Deep learning frameworks (PyTorch), Version control systems (Git), Linux shell environment, HPC Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/881941e1-2741-48e2-8767-12866965fac5</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>f3d07174-85f</externalid>
      <Title>Applied AI, Technical Lead, Forward Deployed AI Engineer - EMEA</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global organisation with teams distributed between France, USA, UK, Germany, and Singapore. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>Our offerings include le Chat, the AI assistant for life and work.</p>
<p>About The Job</p>
<p>Mistral AI is seeking a Technical Lead, Applied AI to drive the technical strategy, execution, and delivery of complex AI solutions for our enterprise customers.</p>
<p>In this role, you will lead project teams of Applied AI Engineers, ensuring the successful deployment of Mistral AI products and the development of high-impact, scalable AI use cases.</p>
<p>You will act as the primary technical point of contact for our most strategic customers, guiding them through the entire lifecycle—from pre-sales to post-implementation—while collaborating closely with research, product, and engineering teams to shape the future of our offerings.</p>
<p>As a Technical Lead, you will bridge the gap between cutting-edge AI research and real-world enterprise applications, ensuring our solutions are robust, scalable, and aligned with both customer needs and Mistral’s technological vision.</p>
<p>Responsibilities</p>
<ul>
<li><p>Deliver, as an IC, the critical lines of code in our complex projects: you’ll be hands-on, de-risking the critical parts while staying deeply involved in coding, reviewing, and optimizing AI solutions.</p>
</li>
<li><p>Lead technical teams of Applied AI Engineers, providing mentorship, technical guidance, and best practices for deploying state-of-the-art GenAI applications across industries.</p>
</li>
<li><p>Lead technical discussions during pre-sales, translating customer requirements into actionable solutions and communicating Mistral’s technological advantages to diverse stakeholders.</p>
</li>
<li><p>Design and oversee the implementation of complex AI systems, including fine-tuning, RAG, agentic workflows, and custom LLM applications, ensuring alignment with Mistral’s product roadmap and open-source initiatives.</p>
</li>
<li><p>Drive innovation by identifying emerging trends in AI, evaluating new tools and methodologies, and championing best practices for fine-tuning, inference, and deployment.</p>
</li>
<li><p>Work closely with product managers, researchers, and engineers to ensure seamless integration of customer feedback into Mistral’s product development cycle.</p>
</li>
</ul>
<p>How We Work in Applied AI</p>
<ul>
<li><p>We care about people and outputs. What matters is what you ship, not the time you spend on it.</p>
</li>
<li><p>Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to. The best idea wins, whether it comes from a principal engineer or someone in their first week.</p>
</li>
<li><p>Always ask why. The best solutions come from deep understanding, not from copying what worked before.</p>
</li>
<li><p>We say what we mean. Feedback is direct, timely, and given because we care.</p>
</li>
<li><p>No politics. Low ego, high standards.</p>
</li>
<li><p>We embrace an unstructured environment and find joy in it.</p>
</li>
</ul>
<p>Requirements</p>
<ul>
<li><p>You are fluent in English.</p>
</li>
<li><p>You hold a PhD or Master’s degree in AI, Machine Learning, Computer Science, or a related field.</p>
</li>
<li><p>You have 7-8+ years of experience in AI/ML, with at least 2+ years in a technical leadership role (e.g., Tech Lead, Engineering Manager, or Solutions Architect) focused on AI products or enterprise solutions.</p>
</li>
<li><p>You have a proven track record of leading teams to deliver complex AI projects, from prototyping to production, in industries such as tech, finance, healthcare, or industrial automation.</p>
</li>
<li><p>You possess deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</p>
</li>
<li><p>You are proficient in Python, PyTorch, and modern AI frameworks (e.g., LangChain, Hugging Face). Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools is a plus.</p>
</li>
<li><p>You have strong software engineering skills, including API design, backend/full-stack development, and system architecture.</p>
</li>
<li><p>You excel in technical communication, with the ability to articulate complex concepts to both technical and non-technical audiences, including executives and engineers.</p>
</li>
<li><p>You thrive in fast-paced, collaborative environments and are passionate about mentoring and growing technical talent.</p>
</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li><p>Contributed to open-source projects, particularly in the LLM or AI space.</p>
</li>
<li><p>Experience in customer-facing roles (e.g., Solutions Architect, Customer Engineer, or Technical Product Manager) with a focus on enterprise AI adoption.</p>
</li>
<li><p>A track record of driving technical strategy and influencing product direction based on customer needs and market opportunities.</p>
</li>
</ul>
<p>Why join us?</p>
<p>You’ll have the opportunity to shape the future of AI adoption in enterprises, work with a world-class team, and contribute to open-source projects that impact millions. If you’re excited about leading technical innovation and solving real-world challenges with AI, we’d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, LangChain, Hugging Face, Cloud platforms (AWS, GCP, Azure), MLOps tools, API design, Backend/full-stack development, System architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops and provides AI solutions for enterprise customers.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/e2cf255f-49c8-4630-afe0-7f665f51f01f</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>1d181b68-6e2</externalid>
      <Title>Engineering Manager</Title>
<Description><![CDATA[<p>We are seeking a passionate and highly skilled Engineering Manager to join our team and drive the development of mission-critical systems powering our AI platform and products: AI Studio, Le Chat and Mistral Vibe.</p>
<p>You will combine deep technical expertise with strong leadership to build high-impact scalable systems while growing and guiding a high-performing engineering team. On average, our team leads are responsible for teams of about five people for a specific product, owning both execution and delivery while remaining hands-on with code.</p>
<p>You will split your time dynamically between individual contribution and leadership, with the balance shifting as your team grows. You will collaborate closely with Product Managers and ship features that directly impact our users and business.</p>
<p>You will be responsible for creating a strong team culture, setting clear processes, raising the engineering bar and ensuring consistent high-quality delivery.</p>
<p>This is a highly critical role at the intersection of product, engineering, infrastructure and people. You will help shape how our engineering organization works, how we build at scale and how we turn cutting-edge AI research into reliable production-grade systems used by millions of developers and end users.</p>
<p>If you enjoy building complex systems, leading ambitious engineers and shipping products that matter, this role is for you.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Act as the clear point of contact for your team and its projects. Remove obstacles, unblock execution and create an environment where engineers can focus on building impactful high-quality products.</li>
</ul>
<ul>
<li>Own the delivery of your team’s roadmap and ensure projects are shipped with the right level of quality, reliability and performance. Keep the team accountable for results and maintain strong execution momentum.</li>
</ul>
<ul>
<li>Partner closely with Product Managers to define the roadmap and priorities of your team. Scope initiatives, estimate engineering effort and drive prioritization decisions based on business and technical impact. Keep the whole team involved and engaged in the product direction.</li>
</ul>
<ul>
<li>Stay deeply involved in architecture design, system design, coding, peer reviews and technical decision-making. Lead by example and remain connected to the day-to-day challenges of the team.</li>
</ul>
<ul>
<li>Define and evolve team processes, best practices and execution rituals with the team. Ensure they remain relevant as the team and product scale.</li>
</ul>
<ul>
<li>Challenge engineers to step up, take ownership and grow beyond their comfort zone. Increase the team’s bus factor through knowledge sharing, documentation and rotation of responsibilities.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5+ years of relevant professional work experience.</li>
</ul>
<ul>
<li>Master’s degree in Computer Science, Information Technology or a related field.</li>
</ul>
<ul>
<li>Proficiency in one of the following languages: Python, JavaScript/TypeScript, C#, Golang.</li>
</ul>
<ul>
<li>Experience with building and leading high-performing teams.</li>
</ul>
<ul>
<li>Experience with working with cross-functional teams like product, design, business, etc.</li>
</ul>
<ul>
<li>Experience with project management and planning.</li>
</ul>
<ul>
<li>Ownership and capacity to ship products end-to-end.</li>
</ul>
<ul>
<li>Strong problem-solving abilities and attention to detail.</li>
</ul>
<ul>
<li>Excellent communication skills.</li>
</ul>
<ul>
<li>Low ego and team spirit mindset.</li>
</ul>
<ul>
<li>Autonomous and a self-starter.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience with Platform &amp; DX products (API, SDK, Tooling, Observability, Billing, etc.).</li>
</ul>
<ul>
<li>AI/ML/MLOps engineering.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, JavaScript/TypeScript, C#, Golang, AI/ML/MLOps engineering, Platform &amp; DX products, API, SDK, Tooling, Observability, Billing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops AI technology to simplify tasks, save time, and enhance learning and creativity. Its comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/add3ec37-a655-4a60-8823-1e871aa1e9b2</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>1aefaf93-fe1</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global team with a presence in France, USA, UK, Germany, and Singapore. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g., reasoning, code, agents) and modalities (e.g., text, image, and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation, and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers, and product teams to ship AI systems that have a real-world impact</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g., Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g., PyTorch, JAX) or distributed systems (e.g., Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous, and a team player</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation, and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Monthly meal allowance</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Private pension plan</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, JAX, Rust, Go, Java, Ray, Kubernetes, Training large transformer models in a distributed fashion, Navigating the full MLOps stack, Strong publication record in a relevant scientific domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops and provides high-performance, open-source AI models, products, and solutions for enterprise and personal use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/bedfc2aa-f1b6-4136-bd17-b3abe4c06120</Applyto>
      <Location>Zurich</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>82ef5cbb-727</externalid>
      <Title>AI Scientist</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise as well as personal needs. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute - a suite that brings frontier intelligence to end-users.</p>
<p>Responsibilities</p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g. reasoning, code, agents) and modalities (e.g. text, image and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p>About You</p>
<ul>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous and a team player</li>
</ul>
<p>Now, it would be ideal if</p>
<ul>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You are able to navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Monthly meal allowance</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Private pension plan</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineer, AI frameworks, distributed systems, high engineering competence, self-starter, autonomous, team player, hands-on experience with training large transformer models in a distributed fashion, navigate the full MLOps stack, strong publication record in a relevant scientific domain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops and provides high-performance, open-source AI models, products, and solutions for enterprise and personal use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/4e498cbf-151e-483a-b3f7-76ff64a22041</Applyto>
      <Location>Warsaw</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>5a15e1d6-e69</externalid>
      <Title>AI Scientist - Robotics</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We are hiring experts in the training of large language models and distributed systems to join our research team.</p>
<p>Key responsibilities:</p>
<ul>
<li>Research and develop novel AI methods for general-purpose mobile manipulation robots</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to deploy AI systems on real robot platforms</li>
</ul>
<p>About you</p>
<ul>
<li>You have hands-on experience either building and deploying AI systems on physical robots or developing large vision-language models</li>
<li>You are a highly proficient software engineer in at least one programming language (preferably Python)</li>
<li>You have hands-on experience with AI frameworks (preferably PyTorch)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
</ul>
<p>Ideal candidates will have experience in one or more of the following:</p>
<ul>
<li>Navigation, manipulation, simulators, 3D, embodied reasoning or vision-language-action models</li>
<li>The full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
</ul>
<p>Benefits in France</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a Gympass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, Robotics, Python, PyTorch, Software Engineering, Navigation, Manipulation, Simulators, 3D, Embodied Reasoning, Vision-Language-Action Models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/60f9dc5b-6d1c-4236-be38-be7233669f00</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>77b20c01-867</externalid>
      <Title>AI Scientist - Audio</Title>
      <Description><![CDATA[<p>About Mistral</p>
<p>At Mistral, we are on a mission to democratize AI, producing frontier intelligence for everyone, developed in the open, and built by engineers all over the world.</p>
<p>We develop models for the enterprise and for consumers, focusing on delivering systems which can really change the way in which businesses operate and which can integrate into our daily lives. All while releasing frontier models open-source, for everyone to try and benefit from.</p>
<p>What will you do?</p>
<ul>
<li>Research and develop novel methods to push the frontier of large language models</li>
<li>Work across use cases (e.g reasoning, code, agents) and modalities (e.g text, image and speech)</li>
<li>Build tooling and infrastructure to allow training, evaluation and analysis of AI models at scale</li>
<li>Work cross-functionally with other scientists, engineers and product teams to ship AI systems which have a real-world impact</li>
</ul>
<p>About you</p>
<ul>
<li>An expert in speech input/output methodologies (specific to audio)</li>
<li>You are a highly proficient software engineer in at least one programming language (Python or other, e.g. Rust, Go, Java)</li>
<li>You have hands-on experience with AI frameworks (e.g. PyTorch, JAX) or distributed systems (e.g. Ray, Kubernetes)</li>
<li>You have high engineering competence. This means being able to design complex software and make it usable in production</li>
<li>You are a self-starter, autonomous and a team player</li>
</ul>
<p>Now, it would be ideal if</p>
<ul>
<li>You have experience working with large-scale speech-language models</li>
<li>You have hands-on experience with training large transformer models in a distributed fashion</li>
<li>You can navigate the full MLOps stack, for instance, fine-tuning, evaluation and deployment</li>
<li>You have a strong publication record in a relevant scientific domain</li>
</ul>
<p>Note that this is not an exhaustive or necessary list of requirements. Please consider applying if you believe you have the skills to contribute to Mistral&#39;s mission. We value profile and experience diversity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive cash salary and equity</Salaryrange>
      <Skills>speech input/output methodologies, Python, PyTorch, JAX, distributed systems, Ray, Kubernetes, high engineering competence, large-scale speech-language models, training large transformer models in a distributed fashion, MLOps stack, fine-tuning, evaluation, deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral develops models for the enterprise and for consumers, focusing on delivering systems which can integrate into our daily lives.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/94173e13-3050-4044-862a-e8dfc2deda5e</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>6186a306-374</externalid>
      <Title>Head of Engineering (AI)</Title>
      <Description><![CDATA[<p><strong>Head of Engineering (AI)</strong></p>
<p>At Fuse Energy, we&#39;re on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system.</p>
<p>As the Head of Engineering (AI) at Fuse Energy, you will lead the development and integration of AI across our platform—from intelligent forecasting models and optimization algorithms to personalized customer experiences and internal automation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the AI engineering roadmap and lead the development of AI-first features</li>
<li>Productionize ML models, ensuring scalability, performance, and observability</li>
<li>Design the infrastructure for deploying and maintaining ML systems in production (e.g., MLOps, CI/CD for ML, model versioning)</li>
<li>Build systems that integrate AI into key parts of our stack, such as:
<ul>
<li>Forecasting customer demand and renewable generation</li>
<li>Dynamic pricing and energy trading algorithms</li>
<li>Intelligent alerts and personalized customer features</li>
</ul>
</li>
<li>Work closely with product and engineering leadership to identify high-impact AI opportunities</li>
<li>Build and lead a high-performing team of AI engineers</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong software engineering background with 5+ years of experience, including at least 2 years leading AI/ML engineering teams</li>
<li>Deep experience deploying ML models into production environments</li>
<li>Proficiency in designing scalable data pipelines and real-time inference systems</li>
<li>Understanding of modern ML tooling and frameworks (e.g., PyTorch, TensorFlow, MLflow, AWS SageMaker)</li>
<li>Strong cross-functional collaboration skills, particularly with data science and product teams</li>
<li>Clear communication and an ability to prioritize for both experimentation and reliability</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Familiarity with optimization, time series modeling, or forecasting</li>
<li>Experience with large language models (LLMs), RAG, or generative AI in production</li>
<li>Background in MLOps or AI infrastructure at scale</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineering, ML Model Deployment, Scalable Data Pipelines, Real-Time Inference Systems, PyTorch, TensorFlow, MLflow, AWS SageMaker, Cross-Functional Collaboration, Optimization, Time Series Forecasting, LLMs, RAG, Generative AI, MLOps, AI Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup that aims to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/xz8u7PJwq8wtrGKdHBNFmd/hybrid-head-of-engineering-(ai)-in-dubai-at-fuse-energy</Applyto>
      <Location>Dubai</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ed3b95c5-f87</externalid>
      <Title>FBS SR Analytics Engineer</Title>
      <Description><![CDATA[<p>Our client is a leading US insurer, providing a wide range of insurance and financial services products with gross written premium over US$25 Billion. They serve over 10 million U.S. households with more than 19 million individual policies across all 50 states.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Create and iterate on data products and develop pipelines to provide data on an ongoing basis.</li>
<li>Assist in enhancing data delivery across PL and Distribution.</li>
<li>Assist with pivoting from antiquated technologies to enterprise standards.</li>
<li>Understand, analyze, and translate business data stories into technical story breakdown structures.</li>
<li>Design, build, test, and implement data products of varying complexity.</li>
<li>Design, build, and maintain ETL/ELT pipelines using Python and SQL.</li>
<li>Develop data validation scripts in Python.</li>
<li>Write SQL queries to detect anomalies, duplicates, and missing values.</li>
<li>Work closely with data analysts, scientists, and business stakeholders.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3 to 5 years of experience as a Data Engineer.</li>
<li>Full English fluency.</li>
<li>BS in Computer Engineering, Information Systems, Data Science, Advanced Analytics, Data Engineering, MLOps, or similar.</li>
<li>Insurance background (desirable).</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package, including:</p>
<ul>
<li>Competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Home Office model.</li>
<li>Career development and training opportunities.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>Dynamic and inclusive work culture within a globally known group.</li>
<li>Private Health Insurance.</li>
<li>Pension Plan.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, ETL Pipeline Building, DevOps, MLOps, AWS Cloud Experience, Data Governance and Management, Data Mining and Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is one of the world&apos;s largest consulting, technology, and outsourcing companies, with over 340,000 employees in more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/7vBWYchan8XNrZmR744q9d/remote-fbs-sr-analytics-engineer-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b82d1d93-ffa</externalid>
      <Title>FBS Data Science Director (Finance)</Title>
      <Description><![CDATA[<p>Our Client is one of the United States&#39; largest insurers, providing a wide range of insurance and financial services products with gross written premium well over US$25 Billion (P&amp;C). They serve more than 10 million U.S. households with more than 19 million individual policies across all 50 states through the efforts of over 48,000 exclusive and independent agents and nearly 18,500 employees.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid company with a strong market presence.</li>
<li>A dynamic, diverse, and collaborative work environment.</li>
<li>Leaders with deep market knowledge and strategic vision.</li>
</ul>
<p><strong>We count on you for:</strong></p>
<ul>
<li>Contribute to advancing the company&#39;s analytics vision during a critical phase of transformation, working alongside other Data &amp; Analytics teams to embed data into the core of strategic decision-making.</li>
<li>Assemble a high-performing, multicultural analytics team from scratch, aligning structure and skills with evolving business needs.</li>
<li>Define and implement scalable processes, tools, and best practices to ensure consistent and high-impact analytics delivery.</li>
<li>Collaborate directly with C-Level stakeholders to define KPIs, shape strategy, and deliver insights that influence top-level decisions.</li>
<li>Lead time-sensitive projects with precision, ensuring insights are both fast and business-relevant.</li>
<li>Convert complex data into clear, actionable recommendations that drive measurable business outcomes.</li>
<li>Provide strategic and technical guidance to data scientists and analysts, fostering a high-performance culture.</li>
<li>Use data to identify opportunities, solve problems, and accelerate growth across business units.</li>
<li>Lead AI/ML model development and deployment, ensuring alignment with business goals and MLOps best practices.</li>
<li>Work closely with IT, engineering, and business teams to optimize data infrastructure and analytics capabilities.</li>
<li>Present complex analytics findings in a compelling and accessible way to both technical and non-technical audiences.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Prior experience in P&amp;C Insurance, Banking or Financial Services.</li>
<li>Advanced SQL and Python for data modeling, forecasting, and auditing.</li>
<li>Strong proficiency in Power BI (Data visualization).</li>
<li>Ability to translate complex data into actionable business recommendations.</li>
<li>Solid foundation in data science and analytics methodologies.</li>
<li>Understanding of data&#39;s role in driving innovation in regulated industries.</li>
<li>Experience driving the adoption and scaling of analytics processes across an organization.</li>
<li>Track record of growing effective teams across multiple business lines.</li>
<li>Familiarity with metrics such as GWP, NB, Retention Rate, Win Rate, Loss Ratio, PIF, and more.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Power BI, Data Science, Analytics, AI/ML, MLOps</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global consulting and technology services company that provides business and technology consulting, application services, and other services to clients across various industries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/aGeNZxtcn8VnULA32c7cto/remote-fbs-data-science-director-(finance)-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>