<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>8ba551e0-be3</externalid>
      <Title>Data Analyst - Physical Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Data Analyst to join xAI&#39;s Infrastructure team responsible for building and operating world-class datacenters and power generation facilities. In this role, you will analyse power and cooling performance data, develop forecasts for utility consumption and costs, build and maintain business analytics dashboards, and deliver data-driven insights to optimise our rapidly expanding physical infrastructure for AI supercomputing.</p>
<p>Responsibilities:</p>
<ul>
<li>Collect, clean, integrate, and analyse high-volume power, cooling, and energy usage data from datacentre facilities and power plants</li>
<li>Build and refine forecasting models for electricity, water, and other utility consumption to support budgeting, planning, and procurement</li>
<li>Design, develop, and maintain interactive business intelligence dashboards and reports using tools such as Seeq, Tableau, Power BI, Looker, or similar</li>
<li>Identify trends, anomalies, inefficiencies, and optimisation opportunities in power distribution and cooling systems</li>
<li>Partner with mechanical, electrical, and facilities engineering teams to translate analytical findings into engineering and operational improvements</li>
<li>Support infrastructure expansion planning through scenario analysis, capacity modelling, and cost projections</li>
<li>Automate data collection pipelines and reporting processes to enable real-time visibility and decision making</li>
<li>Present clear, actionable insights and recommendations to cross-functional teams and leadership</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>SQL, Python, Tableau, Power BI, Looker, Statistics, Time-series analysis, Forecasting techniques, Energy, Utilities, Datacentres, Critical infrastructure, Industrial facilities, SCADA systems, Building management systems, IoT sensor data, Cloud data platforms, Power systems, HVAC/cooling efficiency metrics, Energy modelling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation operates with a flat structure and expects employees to be hands-on.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5112514007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e54c7ca9-9be</externalid>
      <Title>Sales Development Representative - Middle East</Title>
      <Description><![CDATA[<p>We are looking for a top-performing Sales Development Representative to join our team in the Middle East. As a Sales Development Representative, you will be responsible for reaching out to prospective customers, tracking and nurturing your outbound activity, and qualifying leads to determine if they are a good fit for Starburst Data.</p>
<p>Your day-to-day tasks will include researching and pursuing new sales opportunities, developing and executing sales strategies, and collaborating with our Account Executives to close deals.</p>
<p>To succeed in this role, you will need to have a basic understanding of our offerings and be able to communicate effectively with potential customers. You will also need to be able to work independently and as part of a team, prioritizing tasks and managing your time effectively.</p>
<p>In return, you will have the opportunity to learn and grow with our company, develop your skills and expertise, and earn a competitive salary and benefits package.</p>
<p>Responsibilities:</p>
<ul>
<li>Reach out to prospective customers in the Middle East and Emerging Markets with a well-researched hypothesis on how we can solve their pain</li>
<li>Meticulously track and nurture your outbound activity</li>
<li>Nimbly interact with potential customers through various points of contact, including attending in-person meetings, trainings, and trade shows with confidence and ease</li>
<li>Learn the basics of prospecting and discovery to help jumpstart your career in sales</li>
<li>Meet and exceed monthly, quarterly, and annual lead generation quotas</li>
<li>Conduct outbound prospecting through email, phone calls, and social media to schedule meetings for the Sales team</li>
<li>Qualify leads and assess their needs to determine if they are a good fit for Starburst Data</li>
</ul>
<p>Requirements:</p>
<ul>
<li>6-12 months of relevant BDR/SDR experience in tech</li>
<li>Native Arabic speaker</li>
<li>An innate curiosity about how data is changing the world</li>
<li>Enjoy a challenge and getting out of your comfort zone to aid your personal and professional development and growth</li>
<li>Adeptly prioritize and reprioritize based on the evolving demands of other departments</li>
<li>Thrive in the unknown – and have a track record to prove it</li>
<li>Examples of where you have displayed passion and perseverance to accomplish a long-term goal</li>
<li>You want a career in sales and see this as an excellent way to learn and prove yourself</li>
<li>Business acumen, built through academic or professional experiences</li>
<li>You are an entrepreneur at heart</li>
<li>Ability to travel: this role requires 25% in-person travel for purposes including but not limited to new-hire onboarding, team and department offsites, customer engagements, and other company events.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platform, analytics, applications, AI, software sales, prospecting, discovery, lead generation, customer engagement</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Starburst</Employername>
      <Employerlogo>https://logos.yubhub.co/starburst.io.png</Employerlogo>
      <Employerdescription>Starburst is a software company that provides a data platform for analytics, applications, and AI, unifying data across clouds and on-premises. It serves organizations worldwide.</Employerdescription>
      <Employerwebsite>https://www.starburst.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/starburst/jobs/5191922008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d05f2d69-fce</externalid>
      <Title>AI Product Engineer - Agentic AI Platforms (Financial Services)</Title>
      <Description><![CDATA[<p>We are seeking an experienced and innovative AI Product Engineer – Agentic Platforms to join our Financial Services Artificial Intelligence &amp; Business Lines (FS-ABL) practice. This role is ideal for a consulting technologist with deep expertise in modern GenAI tooling, agentic system design, and enterprise SDLC, who can partner directly with clients to envision, design, develop, and deploy Agentic AI platforms in regulated environments.</p>
<p>In this role, you will work at the intersection of client advisory, AI product engineering, and delivery execution, helping banks, insurers, and capital markets firms transition from GenAI pilots to production-grade, governed, multi-agent systems. You will apply leading GenAI frameworks and LLM platforms, including Anthropic, OpenAI, LangChain, LangGraph, DSPy, and vector databases, while operating across the full Agentic SDLC.</p>
<p>P&amp;C Insurance knowledge and experience are a significant plus. Additionally, familiarity with core insurance platforms such as Guidewire, DuckCreek, or Majesco will be extremely helpful in this role.</p>
<p>We are looking for candidates across all levels of experience and expertise - junior through senior level AI Product Engineers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner directly with Financial Services clients to identify, prioritize, and shape Agentic AI use cases across customer operations, underwriting, claims, risk, compliance, finance, and technology.</li>
<li>Lead client workshops to define agent personas, responsibilities, autonomy boundaries, human-in-the-loop checkpoints, and escalation logic.</li>
<li>Translate evolving business needs into agentic product backlogs, roadmaps, and MVP definitions.</li>
<li>Support executive conversations around GenAI platform strategy, operating models, vendor selection, and scale-out approaches.</li>
</ul>
<p><strong>Agentic Platform &amp; Architecture Design</strong></p>
<ul>
<li>Design and implement multi-agent architectures using modern GenAI tooling, including:
<ul>
<li>Planner, executor, reviewer/critic, and supervisor agents</li>
<li>Tool-calling and function-calling agents</li>
<li>Memory-enabled agents (conversation, semantic, episodic, and structured memory)</li>
</ul>
</li>
<li>Leverage LangChain and LangGraph for agent orchestration, workflows, and control flow.</li>
<li>Apply DSPy and declarative prompt optimization techniques for repeatability, performance tuning, and regression control.</li>
<li>Design agent interaction patterns such as hierarchical agents, collaborating agents, and event-driven agent workflows.</li>
<li>Define standardized agent contracts, interfaces, and schemas to enable reuse and scale.</li>
</ul>
<p><strong>Agentic SDLC &amp; Engineering Delivery</strong></p>
<ul>
<li>Own delivery across the full Software Development Lifecycle (SDLC), extending it into a formal Agentic SDLC, including:
<ul>
<li>Agent design specifications and behavior contracts</li>
<li>Prompt, policy, and tool versioning</li>
<li>Simulation environments and offline evaluation</li>
<li>Automated testing of agent flows and guardrails</li>
<li>Controlled rollout, telemetry-driven optimization, and continuous learning</li>
</ul>
</li>
<li>Build production-grade AI services primarily using Python, integrating:
<ul>
<li>LLM providers such as Anthropic (Claude), OpenAI, and open-source models</li>
<li>Retrieval-Augmented Generation (RAG) using vector databases (e.g., Pinecone, FAISS, Milvus, Weaviate)</li>
</ul>
</li>
<li>Implement CI/CD pipelines for agent code, prompts, and policies.</li>
<li>Integrate GenAI agents with client systems via APIs, workflow engines, event streams, and data platforms.</li>
</ul>
<p><strong>Observability, Evaluation &amp; Optimization</strong></p>
<ul>
<li>Implement agent observability including tracing, decision logging, tool usage, and failure analysis.</li>
<li>Apply evaluation frameworks for hallucination detection, consistency checks, and fitness scoring.</li>
<li>Design feedback loops incorporating human-in-the-loop review and reinforcement.</li>
<li>Monitor cost, latency, throughput, and behavioral drift across deployed agents.</li>
</ul>
<p><strong>Governance, Risk &amp; Financial Services Compliance</strong></p>
<ul>
<li>Design Agentic AI platforms aligned with Financial Services regulatory expectations, including:
<ul>
<li>Auditability and traceability of agent decisions</li>
<li>Model and prompt explainability</li>
<li>Data privacy and security controls</li>
<li>Resilience and fail-safe mechanisms</li>
</ul>
</li>
<li>Embed guardrails and policies addressing hallucination risk, bias, unauthorized actions, and escalation failures.</li>
<li>Produce documentation supporting risk, compliance, internal audit, and regulator engagement.</li>
</ul>
<p><strong>Team Leadership &amp; Firm Contribution</strong></p>
<ul>
<li>Provide technical leadership and mentorship to consulting delivery teams.</li>
<li>Contribute to internal GenAI accelerators, agent frameworks, and reusable assets.</li>
<li>Support RFPs, proposals, and client solution designs with credible GenAI and agentic architectures.</li>
<li>Participate in thought leadership on Agentic SDLC, GenAI engineering, and responsible autonomy.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, GenAI, LLM, LangChain, LangGraph, DSPy, vector databases, APIs, workflow engines, event streams, data platforms, Agentic SDLC, agent design, agent architecture, agent interaction, agent contracts, interfaces, schemas, prompt optimization, performance tuning, regression control, CI/CD pipelines, agent code, prompts, policies, GenAI agents, client systems, traceability, decision logging, tool usage, failure analysis, hallucination detection, consistency checks, fitness scoring, human-in-the-loop review, reinforcement, cost, latency, throughput, behavioral drift, auditability, model explainability, data privacy, security controls, resilience, fail-safe mechanisms, guardrails, risk management, compliance, internal audit, regulator engagement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology, with a diverse collective of nearly 350,000 strategic and technological experts across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/nNAFrJUQSrP1dcSBxRDpM5/hybrid-ai-product-engineer---agentic-ai-platforms-(financial-services)-in-new-york-at-capgemini</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5c7e3c9c-ece</externalid>
      <Title>AI Product Engineer - Agentic AI Platforms (Financial Services)</Title>
      <Description><![CDATA[<p>Capgemini is at the forefront of Generative AI innovation, helping Financial Services clients industrialize GenAI and Agentic AI platforms at enterprise scale.</p>
<p>We are seeking an experienced and innovative AI Product Engineer – Agentic Platforms to join our Financial Services Artificial Intelligence &amp; Business Lines (FS-ABL) practice. This role is ideal for a consulting technologist with deep expertise in modern GenAI tooling, agentic system design, and enterprise SDLC, who can partner directly with clients to envision, design, develop, and deploy Agentic AI platforms in regulated environments.</p>
<p>In this role, you will work at the intersection of client advisory, AI product engineering, and delivery execution, helping banks, insurers, and capital markets firms transition from GenAI pilots to production-grade, governed, multi-agent systems. You will apply leading GenAI frameworks and LLM platforms, including Anthropic, OpenAI, LangChain, LangGraph, DSPy, and vector databases, while operating across the full Agentic SDLC.</p>
<p>P&amp;C Insurance knowledge and experience are a significant plus. Additionally, familiarity with core insurance platforms such as Guidewire, DuckCreek, or Majesco will be extremely helpful in this role.</p>
<p>We are looking for candidates across all levels of experience and expertise - junior through senior level AI Product Engineers.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Client Advisory &amp; Product Vision</strong></p>
<ul>
<li>Partner directly with Financial Services clients to identify, prioritize, and shape Agentic AI use cases across customer operations, underwriting, claims, risk, compliance, finance, and technology.</li>
<li>Lead client workshops to define agent personas, responsibilities, autonomy boundaries, human-in-the-loop checkpoints, and escalation logic.</li>
<li>Translate evolving business needs into agentic product backlogs, roadmaps, and MVP definitions.</li>
<li>Support executive conversations around GenAI platform strategy, operating models, vendor selection, and scale-out approaches.</li>
</ul>
<p><strong>Agentic Platform &amp; Architecture Design</strong></p>
<ul>
<li>Design and implement multi-agent architectures using modern GenAI tooling, including:
<ul>
<li>Planner, executor, reviewer/critic, and supervisor agents</li>
<li>Tool-calling and function-calling agents</li>
<li>Memory-enabled agents (conversation, semantic, episodic, and structured memory)</li>
</ul>
</li>
<li>Leverage LangChain and LangGraph for agent orchestration, workflows, and control flow.</li>
<li>Apply DSPy and declarative prompt optimization techniques for repeatability, performance tuning, and regression control.</li>
<li>Design agent interaction patterns such as hierarchical agents, collaborating agents, and event-driven agent workflows.</li>
<li>Define standardized agent contracts, interfaces, and schemas to enable reuse and scale.</li>
</ul>
<p><strong>Agentic SDLC &amp; Engineering Delivery</strong></p>
<ul>
<li>Own delivery across the full Software Development Lifecycle (SDLC), extending it into a formal Agentic SDLC, including:
<ul>
<li>Agent design specifications and behavior contracts</li>
<li>Prompt, policy, and tool versioning</li>
<li>Simulation environments and offline evaluation</li>
<li>Automated testing of agent flows and guardrails</li>
<li>Controlled rollout, telemetry-driven optimization, and continuous learning</li>
</ul>
</li>
<li>Build production-grade AI services primarily using Python, integrating:
<ul>
<li>LLM providers such as Anthropic (Claude), OpenAI, and open-source models</li>
<li>Retrieval-Augmented Generation (RAG) using vector databases (e.g., Pinecone, FAISS, Milvus, Weaviate)</li>
</ul>
</li>
<li>Implement CI/CD pipelines for agent code, prompts, and policies.</li>
<li>Integrate GenAI agents with client systems via APIs, workflow engines, event streams, and data platforms.</li>
</ul>
<p><strong>Observability, Evaluation &amp; Optimization</strong></p>
<ul>
<li>Implement agent observability including tracing, decision logging, tool usage, and failure analysis.</li>
<li>Apply evaluation frameworks for hallucination detection, consistency checks, and fitness scoring.</li>
<li>Design feedback loops incorporating human-in-the-loop review and reinforcement.</li>
<li>Monitor cost, latency, throughput, and behavioral drift across deployed agents.</li>
</ul>
<p><strong>Governance, Risk &amp; Financial Services Compliance</strong></p>
<ul>
<li>Design Agentic AI platforms aligned with Financial Services regulatory expectations, including:
<ul>
<li>Auditability and traceability of agent decisions</li>
<li>Model and prompt explainability</li>
<li>Data privacy and security controls</li>
<li>Resilience and fail-safe mechanisms</li>
</ul>
</li>
<li>Embed guardrails and policies addressing hallucination risk, bias, unauthorized actions, and escalation failures.</li>
<li>Produce documentation supporting risk, compliance, internal audit, and regulator engagement.</li>
</ul>
<p><strong>Team Leadership &amp; Firm Contribution</strong></p>
<ul>
<li>Provide technical leadership and mentorship to consulting delivery teams.</li>
<li>Contribute to internal GenAI accelerators, agent frameworks, and reusable assets.</li>
<li>Support RFPs, proposals, and client solution designs with credible GenAI and agentic architectures.</li>
<li>Participate in thought leadership on Agentic SDLC, GenAI engineering, and responsible autonomy.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally known group</li>
<li>Private Health Insurance</li>
<li>Retirement Benefits</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, GenAI, LLM, LangChain, LangGraph, DSPy, Vector Databases, APIs, Workflow Engines, Event Streams, Data Platforms, Agentic SDLC, Agent Design, Behavior Contracts, Prompt Policy, Tool Versioning, Simulation Environments, Offline Evaluation, Automated Testing, Controlled Rollout, Telemetry-Driven Optimization, Continuous Learning, Production-Grade AI Services, Retrieval-Augmented Generation, Human-in-the-Loop Review, Reinforcement, Cost Latency Throughput, Behavioral Drift, Auditability, Traceability, Model Explainability, Data Privacy, Security Controls, Resilience, Fail-Safe Mechanisms, Guardrails, Policies, Risk Compliance, Internal Audit, Regulator Engagement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dX77bfYLcJf1VCF2yXNUEe/hybrid-ai-product-engineer---agentic-ai-platforms-(financial-services)-in-mexico-city-at-capgemini</Applyto>
      <Location>Mexico City</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b699c631-37e</externalid>
      <Title>FBS Data Engineer-ETL (Informatica)</Title>
      <Description><![CDATA[<p>We are looking for a skilled Data Engineer to design, build, and maintain data pipelines that support analytics and business intelligence initiatives. This role involves both enhancing existing pipelines and developing new ones to integrate data from diverse internal and external sources.</p>
<p>The ideal candidate will have advanced SQL and Informatica skills, experience in ETL development, and a foundational understanding of dimensional data modeling. Experience with DBT is a plus.</p>
<p>Key responsibilities include designing, developing, and maintaining data pipelines and ETL workflows, enhancing and optimising existing data pipelines, building new data ingestion pipelines, and using Informatica to develop and manage ETL processes.</p>
<p>The successful candidate will have a bachelor&#39;s degree in Computer Science, Information Systems, or a related field, and 2-4 years of hands-on experience in data engineering or ETL development using Informatica.</p>
<p>They will also have advanced-level proficiency in writing, optimising, and troubleshooting SQL queries, intermediate experience building and managing pipelines using ETL platforms, and at least 3 years using Informatica for data integration tasks.</p>
<p>Excellent problem-solving and communication skills, with the ability to collaborate across teams, are essential for this role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Informatica, ETL development, dimensional data modeling, DBT, cloud data platforms, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The client for this role is one of the United States&apos; largest insurers, providing a wide range of insurance and financial services products with gross written premiums well over US$25 Billion.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/pD4BtxSbTed3C7zp5tL7cF/remote-fbs-data-engineer-etl-(informatica)-in-brazil-at-capgemini</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f1a17e75-365</externalid>
      <Title>AI Product Engineer - Agentic AI Platforms (Financial Services)</Title>
      <Description><![CDATA[<p>Capgemini is at the forefront of Generative AI innovation, helping Financial Services clients industrialize GenAI and Agentic AI platforms at enterprise scale.</p>
<p>We are seeking an experienced and innovative AI Product Engineer – Agentic Platforms to join our Financial Services Artificial Intelligence &amp; Business Lines (FS-ABL) practice. This role is ideal for a consulting technologist with deep expertise in modern GenAI tooling, agentic system design, and enterprise SDLC, who can partner directly with clients to envision, design, develop, and deploy Agentic AI platforms in regulated environments.</p>
<p>In this role, you will work at the intersection of client advisory, AI product engineering, and delivery execution, helping banks, insurers, and capital markets firms transition from GenAI pilots to production-grade, governed, multi-agent systems. You will apply leading GenAI frameworks and LLM platforms, including Anthropic, OpenAI, LangChain, LangGraph, DSPy, and vector databases, while operating across the full Agentic SDLC.</p>
<p>P&amp;C Insurance knowledge and experience are a significant plus. Additionally, familiarity with core insurance platforms such as Guidewire, DuckCreek, or Majesco will be extremely helpful in this role.</p>
<p>We are looking for candidates across all levels of experience and expertise - junior through senior level AI Product Engineers.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Client Advisory &amp; Product Vision</strong></p>
<ul>
<li>Partner directly with Financial Services clients to identify, prioritize, and shape Agentic AI use cases across customer operations, underwriting, claims, risk, compliance, finance, and technology.</li>
<li>Lead client workshops to define agent personas, responsibilities, autonomy boundaries, human-in-the-loop checkpoints, and escalation logic.</li>
<li>Translate evolving business needs into agentic product backlogs, roadmaps, and MVP definitions.</li>
<li>Support executive conversations around GenAI platform strategy, operating models, vendor selection, and scale-out approaches.</li>
</ul>
<p><strong>Agentic Platform &amp; Architecture Design</strong></p>
<ul>
<li>Design and implement multi-agent architectures using modern GenAI tooling, including:
<ul>
<li>Planner, executor, reviewer/critic, and supervisor agents</li>
<li>Tool-calling and function-calling agents</li>
<li>Memory-enabled agents (conversation, semantic, episodic, and structured memory)</li>
</ul>
</li>
<li>Leverage LangChain and LangGraph for agent orchestration, workflows, and control flow.</li>
<li>Apply DSPy and declarative prompt optimization techniques for repeatability, performance tuning, and regression control.</li>
<li>Design agent interaction patterns such as hierarchical agents, collaborating agents, and event-driven agent workflows.</li>
<li>Define standardized agent contracts, interfaces, and schemas to enable reuse and scale.</li>
</ul>
<p><strong>Agentic SDLC &amp; Engineering Delivery</strong></p>
<ul>
<li>Own delivery across the full Software Development Lifecycle (SDLC), extending it into a formal Agentic SDLC, including:
<ul>
<li>Agent design specifications and behavior contracts</li>
<li>Prompt, policy, and tool versioning</li>
<li>Simulation environments and offline evaluation</li>
<li>Automated testing of agent flows and guardrails</li>
<li>Controlled rollout, telemetry-driven optimization, and continuous learning</li>
</ul>
</li>
<li>Build production-grade AI services primarily using Python, integrating:
<ul>
<li>LLM providers such as Anthropic (Claude), OpenAI, and open-source models</li>
<li>Retrieval-Augmented Generation (RAG) using vector databases (e.g., Pinecone, FAISS, Milvus, Weaviate)</li>
</ul>
</li>
<li>Implement CI/CD pipelines for agent code, prompts, and policies.</li>
<li>Integrate GenAI agents with client systems via APIs, workflow engines, event streams, and data platforms.</li>
</ul>
<p><strong>Observability, Evaluation &amp; Optimization</strong></p>
<ul>
<li>Implement agent observability including tracing, decision logging, tool usage, and failure analysis.</li>
<li>Apply evaluation frameworks for hallucination detection, consistency checks, and fitness scoring.</li>
<li>Design feedback loops incorporating human-in-the-loop review and reinforcement.</li>
<li>Monitor cost, latency, throughput, and behavioral drift across deployed agents.</li>
</ul>
<p><strong>Governance, Risk &amp; Financial Services Compliance</strong></p>
<ul>
<li>Design Agentic AI platforms aligned with Financial Services regulatory expectations, including:
<ul>
<li>Auditability and traceability of agent decisions</li>
<li>Model and prompt explainability</li>
<li>Data privacy and security controls</li>
<li>Resilience and fail-safe mechanisms</li>
</ul>
</li>
<li>Embed guardrails and policies addressing hallucination risk, bias, unauthorized actions, and escalation failures.</li>
<li>Produce documentation supporting risk, compliance, internal audit, and regulator engagement.</li>
</ul>
<p><strong>Team Leadership &amp; Firm Contribution</strong></p>
<ul>
<li>Provide technical leadership and mentorship to consulting delivery teams.</li>
<li>Contribute to internal GenAI accelerators, agent frameworks, and reusable assets.</li>
<li>Support RFPs, proposals, and client solution designs with credible GenAI and agentic architectures.</li>
<li>Participate in thought leadership on Agentic SDLC, GenAI engineering, and responsible autonomy.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package:</p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally known group</li>
<li>Private Health Insurance</li>
<li>Retirement Benefits</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, GenAI, LLM, LangChain, LangGraph, DSPy, Vector Databases, Pinecone, FAISS, Milvus, Weaviate, APIs, Workflow Engines, Event Streams, Data Platforms, Agentic AI, Financial Services, Regulatory Expectations, Auditability, Traceability, Model Explainability, Data Privacy, Security Controls, Resilience, Fail-Safe Mechanisms, Guardrails, Policies, Risk Management, Compliance, Internal Audit, Regulator Engagement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/6SLkPnZzkzqnFXQSJGJZFt/hybrid-ai-product-engineer---agentic-ai-platforms-(financial-services)-in-chicago-at-capgemini</Applyto>
      <Location>Chicago</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9de4a206-807</externalid>
      <Title>Financial Services Digital Customer Experience Strategy Leader</Title>
      <Description><![CDATA[<p>Join Capgemini as a Financial Services Digital Customer Experience Strategy Leader, where you will spearhead the transformation of the customer experience for leading financial institutions. You will be responsible for devising and executing innovative digital strategies that enhance customer engagement and satisfaction across multiple channels.</p>
<p>Collaborating with cross-functional teams, you will leverage cutting-edge technologies and industry insights to deliver seamless, personalized customer journeys that drive business growth and loyalty.</p>
<p>This role leads North America Financial Services&#39; Digital Customer Experience (DCX) technology strategy and major transformation deals. The leader owns large pursuit strategy end to end (shaping solutions, developing value narratives, estimating, differentiating competitively, and guiding cross-functional teams) while engaging C-suite stakeholders to deliver outcomes in growth, experience, and efficiency.</p>
<p>Responsibilities include defining CX vision and maturity, designing journey transformations and operating models, and translating pain points into multi-year roadmaps. The role also sets enterprise CX technology strategy across CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations, ensuring scalable architectures and ROI. Finally, it drives thought leadership and partner ecosystem initiatives with key platforms and fintech/AI partners.</p>
<p>Key Responsibilities:</p>
<ol>
<li>Lead All Large Digital Customer Experience Deals</li>
</ol>
<ul>
<li>Serve as the executive deal lead for all large and strategic CX transformation pursuits across North America.</li>
<li>Own deal strategy, encompassing shaping, solutioning, storytelling, value articulation, estimation, and competitive differentiation.</li>
<li>Lead cross-functional pursuit teams (strategy, architecture, delivery, pricing, industry, partner ecosystem) to craft compelling proposals.</li>
<li>Engage directly with C-suite stakeholders to define outcomes tied to revenue growth, customer experience improvement, and operational efficiency.</li>
<li>Act as the primary executive representative and brand ambassador for all major DCX transformations.</li>
</ul>
<ol start="2">
<li>Customer Experience Strategy and Consulting</li>
</ol>
<ul>
<li>Lead CX visioning, maturity assessments, journey transformation strategies, and future state operating model design.</li>
<li>Advise financial services leaders on unifying sales, service, marketing, and operations with modern digital, cloud, data, and AI platforms.</li>
<li>Translate customer pain points into multi-year, multi-platform transformation roadmaps.</li>
</ul>
<ol start="3">
<li>Enterprise CX Technology Strategy</li>
</ol>
<ul>
<li>Define and articulate the overarching technology strategy for digital CX initiatives within the financial services industry, aligning with business objectives and customer-centric goals.</li>
<li>Develop enterprise technology solution strategies for CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations.</li>
<li>Work closely with solution architects to ensure that technology solutions across various stacks are cohesive, scalable, and effectively address customer needs and business requirements.</li>
<li>Guide clients on platform selection, modernization, integration, and maximizing ROI.</li>
</ul>
<ol start="4">
<li>Customer-Centric Program Planning</li>
</ol>
<ul>
<li>Focus intensely on customer goals, developing comprehensive program plans that drive measurable outcomes and enhance the overall customer experience.</li>
<li>Build program plans, value frameworks, governance structures, and executive reporting models for large-scale CX transformations.</li>
</ul>
<ol start="5">
<li>Market and Thought Leadership</li>
</ol>
<ul>
<li>Create compelling thought leadership on the future of CX, AI-driven servicing, personalized banking, and connected customer journeys.</li>
<li>Present at industry forums and executive briefings, shaping brand perception in the market.</li>
<li>Develop frameworks, accelerators, and methodologies that differentiate our CX practice.</li>
</ul>
<ol start="6">
<li>Partner Ecosystem Leadership</li>
</ol>
<ul>
<li>Leverage strategic relationships with Salesforce, Microsoft, Adobe, Pega, and key fintech/AI partners.</li>
<li>Shape co-innovation initiatives and joint go-to-market (GTM) strategies.</li>
<li>Stay ahead of platform roadmaps, competitive dynamics, and new capabilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and performance-based bonuses</Salaryrange>
      <Skills>CRM, sales transformation, contact center modernization, marketing automation, customer data platforms (CDPs), analytics, AI, workflow automation, customer operations, AI/ML, Generative AI (GenAI), automation applied to customer experience</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dPtiiwfn7szw1AWk5wQZBr/hybrid-financial-services-digital-customer-experience-strategy-leader-in-charlotte-at-capgemini</Applyto>
      <Location>Charlotte</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a89bcfe1-a38</externalid>
      <Title>Financial Services Digital Customer Experience Strategy Leader</Title>
      <Description><![CDATA[<p>Join Capgemini as a Financial Services Digital Customer Experience Strategy Leader, where you will spearhead the transformation of the customer experience for leading financial institutions. You will be responsible for devising and executing innovative digital strategies that enhance customer engagement and satisfaction across multiple channels.</p>
<p>Collaborating with cross-functional teams, you will leverage cutting-edge technologies and industry insights to deliver seamless, personalized customer journeys that drive business growth and loyalty.</p>
<p>This role leads North America Financial Services&#39; Digital Customer Experience (DCX) technology strategy and major transformation deals. The leader owns large pursuit strategy end to end (shaping solutions, developing value narratives, estimating, differentiating competitively, and guiding cross-functional teams) while engaging C-suite stakeholders to deliver outcomes in growth, experience, and efficiency.</p>
<p>Responsibilities include defining CX vision and maturity, designing journey transformations and operating models, and translating pain points into multi-year roadmaps. The role also sets enterprise CX technology strategy across CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations, ensuring scalable architectures and ROI. Finally, it drives thought leadership and partner ecosystem initiatives with key platforms and fintech/AI partners.</p>
<p>Key Responsibilities:</p>
<ol>
<li>Lead All Large Digital Customer Experience Deals</li>
<li>Customer Experience Strategy and Consulting</li>
<li>Enterprise CX Technology Strategy</li>
<li>Customer-Centric Program Planning</li>
<li>Market and Thought Leadership</li>
<li>Partner Ecosystem Leadership</li>
</ol>
<p>Requirements:</p>
<ul>
<li>Education: Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, Business, Finance, Marketing, or a related field. A master&#39;s degree is preferred: an MBA (strategy/finance/marketing), an MS in Information Systems, Computer Science, or Data &amp; Analytics, or similar.</li>
<li>Experience: 20+ years leading digital transformation programs in CX, CRM, customer service, marketing, or customer operations.</li>
<li>Strategic Leadership Skills: Executive-level presence and consultative influence. Ability to build and defend multi-year CX transformation strategies and business cases.</li>
<li>Technical and Domain Skills: Strong understanding of CRM and sales transformation, contact center modernization, marketing automation, customer data platforms (CDPs), analytics, and AI.</li>
<li>Preferred Qualifications: Experience in top-tier consulting or system integration firms. Strong understanding of AI/ML, Generative AI (GenAI), and automation applied to customer experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CRM, sales transformation, contact center modernization, marketing automation, customer data platforms (CDPs), analytics, AI, AI/ML, Generative AI (GenAI), automation</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/8C9j2emTit7S2qerdPeYeg/hybrid-financial-services-digital-customer-experience-strategy-leader-in-new-york-at-capgemini</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1d763315-ccf</externalid>
      <Title>Financial Services Digital Customer Experience Strategy Leader</Title>
      <Description><![CDATA[<p>Join Capgemini as a Financial Services Digital Customer Experience Strategy Leader, where you will spearhead the transformation of the customer experience for leading financial institutions. You will be responsible for devising and executing innovative digital strategies that enhance customer engagement and satisfaction across multiple channels.</p>
<p>Collaborating with cross-functional teams, you will leverage cutting-edge technologies and industry insights to deliver seamless, personalized customer journeys that drive business growth and loyalty.</p>
<p>This role leads North America Financial Services&#39; Digital Customer Experience (DCX) technology strategy and major transformation deals. The leader owns large pursuit strategy end to end (shaping solutions, developing value narratives, estimating, differentiating competitively, and guiding cross-functional teams) while engaging C-suite stakeholders to deliver outcomes in growth, experience, and efficiency.</p>
<p>Responsibilities include defining CX vision and maturity, designing journey transformations and operating models, and translating pain points into multi-year roadmaps. The role also sets enterprise CX technology strategy across CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations, ensuring scalable architectures and ROI. Finally, it drives thought leadership and partner ecosystem initiatives with key platforms and fintech/AI partners.</p>
<p>Key Responsibilities:</p>
<ol>
<li>Lead All Large Digital Customer Experience Deals</li>
</ol>
<ul>
<li>Serve as the executive deal lead for all large and strategic CX transformation pursuits across North America.</li>
<li>Own deal strategy, encompassing shaping, solutioning, storytelling, value articulation, estimation, and competitive differentiation.</li>
<li>Lead cross-functional pursuit teams (strategy, architecture, delivery, pricing, industry, partner ecosystem) to craft compelling proposals.</li>
<li>Engage directly with C-suite stakeholders to define outcomes tied to revenue growth, customer experience improvement, and operational efficiency.</li>
<li>Act as the primary executive representative and brand ambassador for all major DCX transformations.</li>
</ul>
<ol start="2">
<li>Customer Experience Strategy and Consulting</li>
</ol>
<ul>
<li>Lead CX visioning, maturity assessments, journey transformation strategies, and future state operating model design.</li>
<li>Advise financial services leaders on unifying sales, service, marketing, and operations with modern digital, cloud, data, and AI platforms.</li>
<li>Translate customer pain points into multi-year, multi-platform transformation roadmaps.</li>
</ul>
<ol start="3">
<li>Enterprise CX Technology Strategy</li>
</ol>
<ul>
<li>Define and articulate the overarching technology strategy for digital CX initiatives within the financial services industry, aligning with business objectives and customer-centric goals.</li>
<li>Develop enterprise technology solution strategies for CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations.</li>
<li>Work closely with solution architects to ensure that technology solutions across various stacks are cohesive, scalable, and effectively address customer needs and business requirements.</li>
<li>Guide clients on platform selection, modernization, integration, and maximizing ROI.</li>
</ul>
<ol start="4">
<li>Customer-Centric Program Planning</li>
</ol>
<ul>
<li>Focus intensely on customer goals, developing comprehensive program plans that drive measurable outcomes and enhance the overall customer experience.</li>
<li>Build program plans, value frameworks, governance structures, and executive reporting models for large-scale CX transformations.</li>
</ul>
<ol start="5">
<li>Market and Thought Leadership</li>
</ol>
<ul>
<li>Create compelling thought leadership on the future of CX, AI-driven servicing, personalized banking, and connected customer journeys.</li>
<li>Present at industry forums and executive briefings, shaping brand perception in the market.</li>
<li>Develop frameworks, accelerators, and methodologies that differentiate our CX practice.</li>
</ul>
<ol start="6">
<li>Partner Ecosystem Leadership</li>
</ol>
<ul>
<li>Leverage strategic relationships with Salesforce, Microsoft, Adobe, Pega, and key fintech/AI partners.</li>
<li>Shape co-innovation initiatives and joint go-to-market (GTM) strategies.</li>
<li>Stay ahead of platform roadmaps, competitive dynamics, and new capabilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and performance-based bonuses</Salaryrange>
      <Skills>CRM, sales transformation, contact center modernization, marketing automation, customer data platforms (CDPs), analytics, AI, workflow automation, customer operations, AI/ML, Generative AI (GenAI), automation applied to customer experience</Skills>
      <Category>Consulting</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in partnering with companies to transform and manage their business by harnessing the power of technology.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/8iL3hPoTE3UYQ6jfJypxPT/hybrid-financial-services-digital-customer-experience-strategy-leader-in-atlanta-at-capgemini</Applyto>
      <Location>Atlanta</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f95cbaa8-c6f</externalid>
      <Title>AI Product Engineer - Agentic AI Platforms (Financial Services)</Title>
      <Description><![CDATA[<p>Capgemini is at the forefront of Generative AI innovation, helping Financial Services clients industrialize GenAI and Agentic AI platforms at enterprise scale.</p>
<p>We are seeking an experienced and innovative AI Product Engineer – Agentic Platforms to join our Financial Services Artificial Intelligence &amp; Business Lines (FS-ABL) practice. This role is ideal for a consulting technologist with deep expertise in modern GenAI tooling, agentic system design, and enterprise SDLC, who can partner directly with clients to envision, design, develop, and deploy Agentic AI platforms in regulated environments.</p>
<p>In this role, you will work at the intersection of client advisory, AI product engineering, and delivery execution, helping banks, insurers, and capital markets firms transition from GenAI pilots to production-grade, governed, multi-agent systems. You will apply leading GenAI frameworks and LLM platforms, including Anthropic, OpenAI, LangChain, LangGraph, DSPy, and vector databases, while operating across the full Agentic SDLC.</p>
<p>P&amp;C Insurance knowledge and experience are a significant plus. Additionally, familiarity with core insurance platforms such as Guidewire, Duck Creek, or Majesco will be extremely helpful in this role.</p>
<p>We are looking for candidates across all levels of experience and expertise, from junior through senior AI Product Engineers.</p>
<p>Responsibilities:</p>
<p>Client Advisory &amp; Product Vision</p>
<ul>
<li>Partner directly with Financial Services clients to identify, prioritize, and shape Agentic AI use cases across customer operations, underwriting, claims, risk, compliance, finance, and technology.</li>
<li>Lead client workshops to define agent personas, responsibilities, autonomy boundaries, human-in-the-loop checkpoints, and escalation logic.</li>
<li>Translate evolving business needs into agentic product backlogs, roadmaps, and MVP definitions.</li>
<li>Support executive conversations around GenAI platform strategy, operating models, vendor selection, and scale-out approaches.</li>
</ul>
<p>Agentic Platform &amp; Architecture Design</p>
<ul>
<li>Design and implement multi-agent architectures using modern GenAI tooling, including:
<ul>
<li>Planner, executor, reviewer/critic, and supervisor agents</li>
<li>Tool-calling and function-calling agents</li>
<li>Memory-enabled agents (conversation, semantic, episodic, and structured memory)</li>
</ul>
</li>
<li>Leverage LangChain and LangGraph for agent orchestration, workflows, and control flow.</li>
<li>Apply DSPy and declarative prompt optimization techniques for repeatability, performance tuning, and regression control.</li>
<li>Design agent interaction patterns such as hierarchical agents, collaborating agents, and event-driven agent workflows.</li>
<li>Define standardized agent contracts, interfaces, and schemas to enable reuse and scale.</li>
</ul>
<p>Agentic SDLC &amp; Engineering Delivery</p>
<ul>
<li>Own delivery across the full Software Development Lifecycle (SDLC), extending it into a formal Agentic SDLC, including:
<ul>
<li>Agent design specifications and behavior contracts</li>
<li>Prompt, policy, and tool versioning</li>
<li>Simulation environments and offline evaluation</li>
<li>Automated testing of agent flows and guardrails</li>
<li>Controlled rollout, telemetry-driven optimization, and continuous learning</li>
</ul>
</li>
<li>Build production-grade AI services primarily using Python, integrating:
<ul>
<li>LLM providers such as Anthropic (Claude), OpenAI, and open-source models</li>
<li>Retrieval-Augmented Generation (RAG) using vector databases (e.g., Pinecone, FAISS, Milvus, Weaviate)</li>
</ul>
</li>
<li>Implement CI/CD pipelines for agent code, prompts, and policies.</li>
<li>Integrate GenAI agents with client systems via APIs, workflow engines, event streams, and data platforms.</li>
</ul>
<p>Observability, Evaluation &amp; Optimization</p>
<ul>
<li>Implement agent observability including tracing, decision logging, tool usage, and failure analysis.</li>
<li>Apply evaluation frameworks for hallucination detection, consistency checks, and fitness scoring.</li>
<li>Design feedback loops incorporating human-in-the-loop review and reinforcement.</li>
<li>Monitor cost, latency, throughput, and behavioral drift across deployed agents.</li>
</ul>
<p>Governance, Risk &amp; Financial Services Compliance</p>
<ul>
<li>Design Agentic AI platforms aligned with Financial Services regulatory expectations, including:
<ul>
<li>Auditability and traceability of agent decisions</li>
<li>Model and prompt explainability</li>
<li>Data privacy and security controls</li>
<li>Resilience and fail-safe mechanisms</li>
</ul>
</li>
<li>Embed guardrails and policies addressing hallucination risk, bias, unauthorized actions, and escalation failures.</li>
<li>Produce documentation supporting risk, compliance, internal audit, and regulator engagement.</li>
</ul>
<p>Team Leadership &amp; Firm Contribution</p>
<ul>
<li>Provide technical leadership and mentorship to consulting delivery teams.</li>
<li>Contribute to internal GenAI accelerators, agent frameworks, and reusable assets.</li>
<li>Support RFPs, proposals, and client solution designs with credible GenAI and agentic architectures.</li>
<li>Participate in thought leadership on Agentic SDLC, GenAI engineering, and responsible autonomy.</li>
</ul>
<p>Benefits</p>
<p>This position comes with a competitive compensation and benefits package:</p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally known group</li>
<li>Private Health Insurance</li>
<li>Retirement Benefits</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
<p>Salary: USD 95,000 to 150,000 per year</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and performance-based bonuses</Salaryrange>
      <Skills>Generative AI, Agentic AI, Agent-based systems, Multi-agent architectures, LLM platforms, Vector databases, Python, CI/CD pipelines, APIs, Workflow engines, Event streams, Data platforms, Observability, Evaluation frameworks, Human-in-the-loop review, Reinforcement learning, Governance, Risk management, Financial Services compliance, Team leadership, Mentorship, Thought leadership, Cloud computing, Containerization, DevOps, Machine learning, Natural language processing, Recommendation systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>95000</Compensationmin>
      <Compensationmax>150000</Compensationmax>
      <Applyto>https://jobs.workable.com/view/1YdZZ6Tw3ADgx3tiGVjczf/hybrid-ai-product-engineer---agentic-ai-platforms-(financial-services)-in-charlotte-at-capgemini</Applyto>
      <Location>Charlotte</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6c819191-3df</externalid>
      <Title>Data Architect</Title>
      <Description><![CDATA[<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p>The ideal candidate will have extensive experience in designing and implementing data architectures, with a strong understanding of database management, data modelling, and data governance. This role requires a strategic thinker with strong analytical and problem-solving skills and the ability to work collaboratively with clients and cross-functional teams.</p>
<p>As a Data Architect, you will design and implement robust, scalable, secure, and optimized data solutions that support business requirements and strategic goals. You will evaluate the client&#39;s existing data estate, diagnose underlying issues, and propose potential solutions. You will also collaborate with clients to understand their data needs and provide expert advice on data management and architecture.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement data models, data flow diagrams, and data dictionaries</li>
<li>Oversee the ingestion and integration of data from multiple sources into enterprise data platforms</li>
<li>Conduct data quality assessments and implement data governance processes and best practices</li>
<li>Stay updated with the latest trends and technologies in data architecture and management</li>
<li>Provide technical guidance and mentorship to data engineers and other team members</li>
<li>Identify and mitigate data-related risks throughout the project lifecycle</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Proven experience as a Data Architect, with 10+ years of experience in data architecture, database management, and data modelling</li>
<li>Strong knowledge of software development methodologies, tools, and frameworks, particularly Agile</li>
<li>Proficiency in both SQL and NoSQL database management systems (e.g., SQL Server, Oracle, MongoDB, Cosmos DB, Snowflake, Databricks)</li>
<li>Hands-on experience with data modelling tools, data warehousing, ETL processes, and data integration techniques</li>
<li>Experience with at least one cloud data platform (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark)</li>
</ul>
<p>Given that this is just a short snapshot of the role, we encourage you to apply even if you don&#39;t meet all the requirements listed above. We are looking for individuals who strive to make an impact and are eager to learn.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, database management, data modelling, data governance, SQL, NOSQL, Agile, cloud data platform, big data technologies</Skills>
      <Category>IT</Category>
      <Industry>Consulting</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/infosys.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with market leading brands across sectors. Its parent organization, Infosys, is a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://www.infosys.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/gxyLcoL5pitQCmJJzTECEv/remote-data-architect-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8e20eaf6-7f6</externalid>
      <Title>Data Operations, Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>Own advanced operational support and stability for enterprise data platforms, acting as the primary L2/L3 interface for ETL/ELT pipelines, orchestration, observability, and Snowflake workloads. The role bridges execution and engineering, with accountability for incident resolution, platform reliability, and operational improvement.</p>
<p>Key Responsibilities</p>
<ul>
<li>Own L1/L2 operational support for production data platforms, including data lakes, streaming pipelines, and Snowflake-based analytics.</li>
<li>Diagnose and resolve complex failures in ETL/ELT pipelines and orchestration frameworks, partnering with engineering where required.</li>
<li>Actively manage incidents, including impact assessment, remediation coordination, and post-incident documentation.</li>
<li>Improve monitoring, alerting, and observability coverage, identifying gaps and driving instrumentation enhancements.</li>
<li>Support onboarding of new pipelines and data products by validating operational readiness, scalability, and reliability.</li>
<li>Analyze recurring incidents and data quality issues, contributing to root cause analysis (RCA) and long-term remediation.</li>
<li>Mentor analysts on operational best practices, troubleshooting, and platform behavior.</li>
<li>Contribute to automation initiatives to reduce manual effort and improve operational efficiency.</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data platforms, ETL/ELT pipelines, orchestration, observability, Snowflake workloads, AWS, Azure, GCP, cloud-native data services, monitoring, alerting, observability systems</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides a range of investment management services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/v9REx3w1EEK7y2df1zPkqK/data-operations%2C-associate-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>94fdb80f-cee</externalid>
      <Title>Cloud Data Engineer</Title>
      <Description><![CDATA[<p>Part of The Brandtech Group, fifty-five is a data consultancy helping brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer experience.</p>
<p>As part of the company&#39;s continued expansion into cloud services in APAC, we are hiring a Cloud Data Engineer to join our Taipei team.</p>
<p>This person will work closely with our clients in Taiwan and the region, collaborating with both our local and global engineering teams.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement data architectures and pipelines for cloud and digital analytics projects on cloud platforms</li>
<li>Deliver hands-on technical services including cloud migration, data transformation, data warehousing, visualization, and advanced analytics</li>
<li>Set up CI/CD pipelines and deployment workflows to ensure proper integration of cloud infrastructure and data pipelines</li>
<li>Streamline and automate processes to optimize performance and cost-efficiency for digital analytics platforms</li>
<li>Support pre-sales activities with local consultants (e.g. demo development, RFP contribution, technical solutioning)</li>
<li>Collaborate with Global Engineering team to develop and deliver POCs for cloud and data-related use cases</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>University degree in Computer Science, Information Systems, or related disciplines</li>
<li>Minimum 1 year of experience with cloud data platforms (GCP preferred; AWS or Azure also welcome)</li>
<li>Familiar with data engineering concepts and tools (e.g. BigQuery, Dataflow, Pub/Sub, Airflow, etc.)</li>
<li>Proficient in one or more programming languages (e.g. Python, Java)</li>
<li>Knowledge of API design, microservices, and DevOps practices (CI/CD, version control, containerization)</li>
<li>Good understanding of data analytics, data warehousing, and visualization (e.g. Looker, Data Studio, Tableau)</li>
<li>Experience with website or mobile app tracking implementation is a plus</li>
<li>Professional cloud certification (GCP, AWS, or Azure) is a plus</li>
<li>Able to communicate technical concepts clearly to non-technical stakeholders</li>
<li>Strong problem-solving skills, self-driven, and collaborative</li>
<li>Fluent in English and Mandarin Chinese</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Exposure to cloud automation, marketing platforms, and media data analytics projects</li>
<li>Opportunity to work with our global consulting and engineering teams to engage our clients from diverse industries around the world</li>
<li>20 days Annual Leave</li>
<li>Work-from-home flexibility (up to 2 days per week)</li>
<li>Regular team activities including TGIF, team lunch and Off-site!</li>
<li>A multicultural environment with employees from over 20 countries</li>
<li>Values centered on excellence, caring and sharing</li>
<li>Continuous (and certified) training on the digital ecosystem and technologies (initial training for all new employees, followed by ongoing training sessions, etc.)</li>
<li>Particular importance given to work-life balance, well-being, and the right to disconnect</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud data platforms, BigQuery, Dataflow, Pub/Sub, Airflow, Python, Java, API design, microservices, DevOps practices, data analytics, data warehousing, visualization, website or mobile app tracking implementation, professional cloud certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>fifty-five</Employername>
      <Employerlogo>https://logos.yubhub.co/fifty-five.com.png</Employerlogo>
      <Employerdescription>fifty-five is a data consultancy helping brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer experience. It has over 300 employees globally.</Employerdescription>
      <Employerwebsite>https://www.fifty-five.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/txdbDrb5JndzD9ytk688xh/hybrid-cloud-data-engineer---taiwan-in-taipei-at-fifty-five</Applyto>
      <Location>Taipei</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d5144f70-077</externalid>
      <Title>Account Executive - Enterprise</Title>
      <Description><![CDATA[<p>About Mistral</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work. We are a team passionate about AI and its potential to transform society.</p>
<p>Role Summary</p>
<p>As an Enterprise Account Executive, you will play a pivotal role in driving Mistral&#39;s adoption among our most strategic enterprise customers. We are looking for highly action-oriented individuals who thrive in fast-paced environments, love making deals, and have a strong bias for execution. While traditional enterprise sales experience is valuable, we are particularly interested in candidates with strategy consulting or investment banking backgrounds who are excited to transition into a high-impact commercial role. You will own the entire sales cycle, from prospecting and first introductions to closing complex enterprise agreements, working closely with our technical, implementation, and legal teams to deliver exceptional value to our customers. Success in this role means combining strategic insight, strong executive engagement, and disciplined deal execution to turn opportunities into long-term partnerships.</p>
<p>Responsibilities:</p>
<p><strong>Lead Development (Strategic Outbound &amp; Qualified Inbound)</strong></p>
<ul>
<li>Drive strategic outreach &amp; leverage introductions to engage high-potential enterprise customers</li>
<li>Convert inbound opportunities into high-value partnerships, including upsells and bespoke enterprise agreements</li>
<li>Build and maintain a strong pipeline of qualified opportunities</li>
</ul>
<p><strong>Value Proposition Validation</strong></p>
<ul>
<li>Support enterprise customers through Proof of Concept (POC) phases, ensuring a smooth and impactful evaluation process</li>
<li>Translate successful evaluations into long-term production contracts by demonstrating clear ROI and business impact</li>
<li>Align Mistral&#39;s capabilities with customer strategic priorities</li>
</ul>
<p><strong>Deal Management &amp; Closing</strong></p>
<ul>
<li>Develop and execute strategic sales plans to convert leads into long-term customers</li>
<li>Act as the primary point of contact for external stakeholders throughout the entire sales cycle</li>
<li>Lead negotiations and collaborate with legal, technical, and implementation teams to finalize agreements</li>
</ul>
<p><strong>Executive Engagement</strong></p>
<ul>
<li>Build strong relationships with C-level executives, innovation leaders, and senior decision-makers</li>
<li>Understand customer strategic priorities and position Mistral&#39;s AI capabilities as a critical enabler of their initiatives</li>
<li>Guide executive stakeholders through complex technology adoption decisions</li>
</ul>
<p><strong>Technical Collaboration</strong></p>
<ul>
<li>Develop a strong understanding of Mistral&#39;s AI platform and technical capabilities</li>
<li>Work closely with implementation and engineering teams to address technical questions and ensure successful deployments</li>
</ul>
<p><strong>Training &amp; Enablement</strong></p>
<ul>
<li>Share customer insights internally to inform product development and strategy</li>
<li>Help internal teams better understand enterprise customer needs and market opportunities</li>
</ul>
<p>Who you are:</p>
<p><strong>Consulting or Strategic Background</strong></p>
<ul>
<li>Experience in strategy consulting (e.g., McKinsey, BCG, Bain, or similar firms)</li>
<li>Strong ability to structure complex problems and translate strategy into action</li>
</ul>
<p><strong>Commercial Mindset</strong></p>
<ul>
<li>Highly action-oriented with a strong bias toward execution</li>
<li>Passion for deal-making and turning opportunities into closed agreements</li>
<li>Ability to operate in fast-paced and ambiguous environments</li>
</ul>
<p><strong>Executive Presence</strong></p>
<ul>
<li>Comfortable engaging and influencing C-level executives</li>
<li>Strong communication and storytelling skills</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience working on large transformation projects (AI, infrastructure, telecom, energy, or digital transformation)</li>
<li>Familiarity with AI, data platforms, or enterprise software ecosystems</li>
<li>Ideally, a first role in sales, business development, or deal-closing</li>
</ul>
<p>What We Offer:</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a gym pass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, data platforms, enterprise software ecosystems, strategy consulting, Investment Banking, commercial mindset, deal-making, complex problem-solving, execution, fast-paced environments, ambiguous environments, C-level executives, communication, storytelling</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI provides high-performance, optimized, open-source and cutting-edge AI models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/2a357282-9d44-4b41-a249-c75ffe878ce2</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5973e28c-ce1</externalid>
      <Title>Data Analyst - Physical Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Data Analyst to join xAI&#39;s Infrastructure team responsible for building and operating world-class datacenters and power generation facilities. In this role, you will focus on analysing power and cooling performance data, developing forecasts for utility consumption and costs, and delivering data-driven insights to optimise our rapidly expanding physical infrastructure for AI supercomputing.</p>
<p>Responsibilities:</p>
<ul>
<li>Collect, clean, integrate, and analyse high-volume power, cooling, and energy usage data from datacenter facilities and power plants</li>
<li>Build and refine forecasting models for electricity, water, and other utility consumption to support budgeting, planning, and procurement</li>
<li>Design, develop, and maintain interactive business intelligence dashboards and reports using tools such as Seeq, Tableau, Power BI, Looker, or similar</li>
<li>Identify trends, anomalies, inefficiencies, and optimisation opportunities in power distribution and cooling systems</li>
<li>Partner with mechanical, electrical, and facilities engineering teams to translate analytical findings into engineering and operational improvements</li>
<li>Support infrastructure expansion planning through scenario analysis, capacity modelling, and cost projections</li>
<li>Automate data collection pipelines and reporting processes to enable real-time visibility and decision making</li>
<li>Present clear, actionable insights and recommendations to cross-functional teams and leadership</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>4+ years of professional experience in data analysis, business intelligence, or analytics engineering</li>
<li>Strong SQL skills and proficiency in Python (pandas, scikit-learn, or similar) or R for data analysis and modelling</li>
<li>Hands-on experience building dashboards and visualisations with Tableau, Power BI, Looker, or equivalent</li>
<li>Solid foundation in statistics, time-series analysis, and forecasting techniques</li>
<li>Experience working with large datasets and building scalable reporting solutions</li>
<li>Excellent written and verbal communication skills</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Background in energy, utilities, datacenters, critical infrastructure, or industrial facilities</li>
<li>Familiarity with SCADA systems, building management systems (BMS), or IoT sensor data</li>
<li>Experience with cloud data platforms (Snowflake, BigQuery, AWS/GCP/Azure data services)</li>
<li>Knowledge of power systems, HVAC/cooling efficiency metrics (PUE, WUE, etc.), or energy modelling</li>
<li>Advanced degree in Data Science, Statistics, Engineering, Operations Research, or related quantitative field</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Tableau, Power BI, Looker, Statistics, Time-series analysis, Forecasting, Data analysis, Business intelligence, Analytics engineering, Energy, Utilities, Datacenters, Critical infrastructure, Industrial facilities, SCADA systems, Building management systems, IoT sensor data, Cloud data platforms, Power systems, HVAC/cooling efficiency metrics, Energy modelling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation operates with a flat structure and has a small, highly motivated team.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5112514007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9eb594a6-97b</externalid>
      <Title>Product Manager 3</Title>
      <Description><![CDATA[<p>Join the team as our next Data Platform Product Manager in the Data Governance and Insights team.</p>
<p>This position drives Data Insights and Twilio&#39;s Data Governance initiatives across the company, and is based in India. You will work with many teams within Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security, data reliability, and access controls. We are also the bridge to the reporting systems trusted by customers, executives and shareholders.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Champion customer-facing product development that will reduce time to insights.</li>
<li>Own the cradle-to-grave product lifecycle for insights platforms.</li>
<li>Understand the needs of our end customers in the global communications market and build a platform to help internal teams manage and leverage their data to derive meaningful insights.</li>
<li>Support Data Governance initiative for data pipelines and insights products, working with product managers and engineering counterparts across various organizations and stakeholders.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, customer engagement platforms, streaming applications, Kafka, ElasticSearch, Clickhouse, Spark, Presto/Athena, cloud, APIs, communications, enterprise software, data reliability, ETL techniques, collaborative approach, ability to work with distributed, cross-functional teams, great communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424471</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7d57ab2d-f3b</externalid>
      <Title>Cloud Solution Architect</Title>
      <Description><![CDATA[<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow&#39;s transportation.</p>
<p>If you&#39;re looking for the chance to leverage advanced technology to redefine the transportation landscape, enhance the customer experience, and improve people&#39;s lives: this is the opportunity for you. Join us and challenge your IT expertise and analytical skills to help create vehicles that are as smart as you are.</p>
<p>To meet the growing needs of the Customer analytics business, the team is looking for a self-motivated, technically proficient individual to craft and shepherd coherent solutions. This will require collaboration with a range of stakeholders to clarify requirements, establish pragmatic approaches, and support and articulate decisions over time. You will join a cloud architecture team that works closely with engineering teams and other architects across the organisation.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Technical Requirements</strong></p>
<ul>
<li>Extensive experience with Google Cloud Platform (GCP), specifically BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner and Apigee.</li>
<li>Security &amp; Networking: Strong understanding of cloud security protocols, IAM, encryption, and complex network topologies.</li>
<li>Data Management: Proficiency in Enterprise Data Platforms, Data mesh architecture and data-driven architectural patterns.</li>
<li>DevOps Tooling: Hands-on experience with GitHub, SonarQube, Checkmarx, and FOSSA.</li>
<li>Software Engineering: Strong background in building Web Services and maintaining Clean Code standards.</li>
</ul>
<p><strong>Technical Leadership &amp; Strategy</strong></p>
<ul>
<li>System Design: Work with engineering teams to refine system designs, evangelising for horizontal scalability, resilience, and Clean Code compliance.</li>
<li>Product Collaboration: Partner with Product Managers to decompose complex business needs into incremental, production-ready user stories within an Agile/Sprint methodology.</li>
<li>Architectural Governance: Assess and document the rationale and tradeoffs for technical decisions; contribute to the broader Cloud Architecture team to improve global practices.</li>
<li>DevOps Excellence: Utilise and improve CI/CD pipelines using GitHub and automated testing/security tools to maximise deployment efficiency and minimise risk.</li>
</ul>
<p><strong>Cloud, Networking &amp; Security</strong></p>
<ul>
<li>Secure Infrastructure: Serve as the primary architect for cloud solutions, ensuring &#39;Secure-by-Design&#39; principles are applied across Google Cloud services (Dataflow, Dataproc, CloudRun, CloudSQL, Spanner).</li>
<li>Advanced Networking: Design and optimise cloud networking configurations, including VPCs, Service Controls, Load Balancing, and Private Service Connect to ensure high availability and low latency.</li>
<li>Cyber Security Oversight: Integrate security scanning and compliance into the architecture (utilising Checkmarx, SonarQube, and FOSSA). Proactively address vulnerabilities in distributed systems and AI models (e.g., OWASP Top 10 for LLMs).</li>
<li>API &amp; Data Contracts: Bolster &#39;Data as a Product&#39; practices by enforcing strict API standards and data contracts to ensure seamless, secure interoperability between services.</li>
<li>FinOps &amp; Cost Optimisation: Drive fiscal responsibility by right-sizing GCP resources and optimising Generative AI architectures (token management/model selection) to maximise ROI.</li>
<li>SRE &amp; Performance Tuning: Apply Site Reliability Engineering principles to ensure high availability, minimise system latency, and lead root-cause analysis for complex, distributed system failures.</li>
<li>DevSecOps &amp; Problem Solving: Integrate security automation into CI/CD pipelines to ensure &#39;Secure-by-Design&#39; deployments while solving complex architectural trade-offs between speed, scale, and risk.</li>
<li>Continuous Learning: Stay at the forefront of AI research, specifically regarding autonomous agents, prompt engineering, etc.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>AI Development Tools: Experience with AI development tools and frameworks (e.g., LangChain, LangGraph, or Agent Dev Kit) to accelerate the delivery of intelligent applications.</li>
<li>Agentic &amp; GenAI Design: Lead the architectural design of Agentic AI systems (multi-agent orchestration) and Generative AI solutions, including Retrieval-Augmented Generation (RAG) patterns and LLM integration.</li>
<li>Kubernetes (GKE): Experience managing containerised workloads at scale.</li>
<li>Kafka/Event-Driven Design: Experience with high-throughput messaging and event-driven architectures.</li>
<li>MLOps: Familiarity with the end-to-end lifecycle of machine learning models in production.</li>
</ul>
<p><strong>Qualifications</strong></p>
<p><strong>You&#39;ll have...</strong></p>
<ul>
<li>A bachelor&#39;s or foreign equivalent degree in computer science, information technology or a technology-related field</li>
<li>5+ years of software engineering experience using Java or Python developing services (APIs, REST, etc.)</li>
<li>2+ years of experience with Google Cloud Platform or another cloud service provider (AWS, Azure, etc.) and associated cloud components</li>
<li>Experience designing/architecting and running distributed systems in a production environment</li>
<li>Strong communication skills and cognitive agility: the ability to engage in deep technical discussions with customers and peers, become a trusted technical advisor, and maintain good documentation</li>
</ul>
<p><strong>Even better, you may have...</strong></p>
<ul>
<li>Master&#39;s degree in computer science, electrical engineering or a closely related field of study</li>
<li>Familiarity with a breadth of programming languages, platforms, and systems</li>
<li>Experience with asynchronous messaging and eventually consistent system design</li>
<li>An agile, pragmatic, and empirical mindset</li>
<li>Critical thinking, decision-making and leadership aptitudes</li>
<li>Good organisational and problem-solving abilities</li>
<li>MDM, Entity Resolution, Customer Analytics and Marketing Analytics experience is a huge plus</li>
</ul>
<p>You may not check every box, or your experience may look a little different from what we&#39;ve outlined, but if you think you can bring value to Ford Motor Company, we encourage you to apply!</p>
<p><strong>As an established global company, we offer the benefit of choice. You can choose what your Ford future will look like: will your story span the globe, or keep you close to home? Will your career be a deep dive into what you love, or a series of new teams and new skills? Will you be a leader, a changemaker, a technical expert, a culture builder…or all of the above? No matter what you choose, we offer a work life that works for you, including:</strong></p>
<ul>
<li>Immediate medical, dental, and prescription drug coverage</li>
<li>Flexible family care, parental leave, new parent ramp-up programs, subsidised back-up child care and more</li>
<li>Vehicle discount programme for employees and family members, and management leases</li>
<li>Tuition assistance</li>
<li>Established and active employee resource groups</li>
<li>Paid time off for individual and team community service</li>
<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>
<li>Paid time off and the option to purchase additional vacation time</li>
</ul>
<p><strong>For a detailed look at our benefits, see the Benefit Summary.</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$115,000-$192,900</Salaryrange>
      <Skills>Google Cloud Platform, BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner, Apigee, Security &amp; Networking, IAM, Encryption, Complex Network Topologies, Data Management, Enterprise Data Platforms, Data Mesh Architecture, Data-Driven Architectural Patterns, DevOps Tooling, GitHub, SonarQube, Checkmarx, FOSSA, Software Engineering, Web Services, Clean Code Standards, System Design, Horizontal Scalability, Resilience, Clean Code Compliance, Product Collaboration, Agile/Sprint Methodology, Architectural Governance, Cloud Architecture, DevOps Excellence, CI/CD Pipelines, Automated Testing/Security Tools, Secure Infrastructure, Secure-by-Design Principles, Cloud Services, Advanced Networking, VPCs, Service Controls, Load Balancing, Private Service Connect, Cyber Security Oversight, Security Scanning, Compliance, Distributed Systems, AI Models, API &amp; Data Contracts, Data as a Product, API Standards, Data Contracts, Seamless Interoperability, FinOps &amp; Cost Optimisation, Fiscal Responsibility, GCP Resources, Generative AI Architectures, Token Management, Model Selection, ROI Maximisation, SRE &amp; Performance Tuning, High Availability, System Latency, Root-Cause Analysis, DevSecOps &amp; Problem Solving, Security Automation, Continuous Learning, AI Research, Autonomous Agents, Prompt Engineering, Kubernetes, Containerised Workloads, Kafka/Event-Driven Design, High-Throughput Messaging, Event-Driven Architectures, MLOps, Machine Learning Models, End-to-End Lifecycle, AI Development Tools, Frameworks, LangChain, LangGraph, Agent Dev Kit, Agentic &amp; GenAI Design, Multi-Agent Orchestration, Generative AI Solutions, Retrieval-Augmented Generation, LLM Integration, Kubernetes (GKE)</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/corporate.ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://corporate.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62370</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d4a85662-5ed</externalid>
      <Title>Enterprise Product Manager - AI Solutions</Title>
      <Description><![CDATA[<p>As an Enterprise Product Manager, AI Solutions, you will identify, shape, and scale AI-enabled products and workflows for core enterprise functions, with an initial emphasis on Finance. You will partner with Finance, Enterprise Platforms Engineering, People and other enterprise teams to translate operational pain points into practical AI solutions, from copilots and agentic workflows to data products and system automations.</p>
<p>This is an individual contributor role with broad cross-functional leadership: you will define the roadmap, prioritize high-impact use cases, run structured experiments, drive delivery across technical and non-technical teams, and measure adoption and business value. You will help turn fragmented tools, data, and processes into cohesive AI solutions that are secure, reliable, and usable in the real world.</p>
<p>You will work hands-on with leading ERP, HCM, CRM, planning, ticketing and data platforms (e.g., Oracle, Workday, Salesforce, Databricks, etc.), along with our own internal products and platforms, ensuring solutions are grounded in the realities of data quality, controls, governance, and operational ownership.</p>
<p>We’re looking for people who can move fluidly from understanding business process needs to defining product requirements and collaborating with internal product teams to improve solutions for our users.</p>
<p>In this role, you will:</p>
<ul>
<li>Identify and prioritize AI-driven opportunities to automate finance operations and adjacent enterprise workflows, enabling more efficient processes, better insights, and stronger execution.</li>
<li>Partner with stakeholders to understand current-state processes, pain points, data dependencies, risk constraints, and success metrics, then translate those into clear product requirements and prioritized roadmaps.</li>
<li>Define product requirements and develop lightweight prototypes to help guide engineering teams in building solutions that meet user needs.</li>
<li>Design and lead pilots for AI-enabled workflows, including agentic tools, workflow automation, and custom applications, with clear hypotheses, rollout criteria, and measurable outcomes.</li>
<li>Collaborate with Enterprise Platform Engineering and internal platform teams to adopt OpenAI technologies, such as Codex, as well as MCP-based actions and connectors that securely expose enterprise system capabilities to AI workflows.</li>
<li>Coordinate secure data onboarding and integration across systems such as Oracle, Workday, Salesforce, Anaplan, Databricks, and external vendor platforms, partnering with data owners, IT, Security, Legal, and Risk as needed.</li>
<li>Own delivery across technically and organizationally complex initiatives, aligning requirements, dependencies, and governance reviews, so teams can move from experimentation to scaled adoption.</li>
<li>Track adoption, quality, and business impact through dashboards, user feedback, and executive-ready updates, and use those signals to iterate on product direction and investment priorities.</li>
<li>Assist with other development efforts as needed.</li>
</ul>
<p>You might thrive in this role if you:</p>
<ul>
<li>Have 8+ years of experience across product management, enterprise applications, business systems, or enterprise transformation, with a track record of driving technology-enabled business outcomes in complex environments.</li>
<li>Bring strong fluency in Finance processes and enterprise technology, with experience across areas such as quote-to-cash, revenue, billing, accounting, treasury, FP&amp;A, or adjacent finance domains.</li>
<li>Understand how enterprise platforms such as Oracle Fusion, Workday, Salesforce, Databricks, Anaplan, or similar systems fit together to support core business operations and data flows.</li>
<li>Have led the delivery of production AI, automation, or data products and naturally think about governance, failure modes, usability, and operational adoption.</li>
<li>Can structure ambiguous business problems, evaluate competing opportunities, and turn them into pragmatic roadmaps, pilot plans, and scaled implementations.</li>
<li>Are comfortable working across APIs, integrations, identity, data access patterns, and workflow orchestration, and can partner effectively with engineers, architects, and data teams.</li>
<li>Have strong judgment in enterprise environments and can challenge assumptions, identify product gaps, and provide actionable feedback to improve internal tools and platforms.</li>
<li>Are an exceptional communicator who can document decisions crisply, influence without authority, and present status, risks, and recommendations clearly to both executives and practitioners.</li>
<li>Can lead cross-functionally without relying on formal org boundaries: you build trust, create momentum, and raise the quality bar through clarity, judgment, and follow-through.</li>
</ul>
<p>About OpenAI</p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$260K – $288K</Salaryrange>
      <Skills>Product Management, Enterprise Applications, Business Systems, Enterprise Transformation, Finance Processes, Enterprise Technology, ERP, HCM, CRM, Planning, Ticketing, Data Platforms, Oracle, Workday, Salesforce, Databricks, APIs, Integrations, Identity, Data Access Patterns, Workflow Orchestration, Governance, Failure Modes, Usability, Operational Adoption</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/fca0f787-f3c5-4528-bb20-803dba07501a</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>712a2d3f-234</externalid>
      <Title>Senior Legal Solutions Architect</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>We offer a competitive salary range of $216K – $240K, including generous equity, performance-related bonus(es) for eligible employees, and the following benefits:</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Role</strong></p>
<p>We are hiring a Senior Legal Solutions Architect to design, build, and scale the AI-native systems that power OpenAI’s Legal function. This role sits at the intersection of legal operations, enterprise architecture, business intelligence, and applied AI, and is responsible for architecting workflows that combine traditional legal systems (CLM, OCM, case management, intake) with agentic, model-driven automation using OpenAI’s API and agent builder platform.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>AI-Native &amp; Agentic Workflow Design</strong></p>
<ul>
<li>Design and implement agentic legal workflows incorporating multi-step reasoning, tool-calling, orchestration, and human-in-the-loop review using OpenAI models and APIs.</li>
<li>Build systems where agents can:
<ul>
<li>Triage and route legal intake.</li>
<li>Extract, normalize, and reason over contract, matter, and billing data.</li>
<li>Apply playbooks, flag deviations, and escalate issues.</li>
<li>Interact with downstream systems and data platforms in a controlled, auditable way.</li>
</ul>
</li>
<li>Define guardrails around autonomy, review thresholds, and escalation paths.</li>
</ul>
<p><strong>Legal Systems &amp; Data Architecture</strong></p>
<ul>
<li>Serve as the primary architect and steward of the legal technology stack, including:
<ul>
<li>CLM, OCM, and intake systems.</li>
<li>Workflow orchestration and middleware.</li>
<li>AI/agent services using OpenAI models, APIs, and agent builder platforms.</li>
<li>Data platforms.</li>
</ul>
</li>
<li>Design data flows that ensure legal data is:
<ul>
<li>Structured and queryable.</li>
<li>Governed for privilege and access.</li>
<li>Suitable for analytics and AI-driven workflows.</li>
</ul>
</li>
</ul>
<p><strong>Legal Analytics Enablement</strong></p>
<ul>
<li>Design and oversee data flows that ensure legal data (contracts, matters, requests, invoices, workflow events) from core systems is structured, reportable, and ready for analytics and AI use cases.</li>
<li>Support AI and agentic use cases that rely on curated datasets, embeddings, and historical context.</li>
<li>Ensure data quality, lineage, and auditability across systems.</li>
</ul>
<p><strong>Integrations, APIs &amp; Middleware</strong></p>
<ul>
<li>Configure, extend, support, and in some cases build API-based integrations, webhooks and middleware connectors across legal systems, data platforms, and enterprise tools.</li>
</ul>
<p><strong>You’ll Enjoy This Role If You Have:</strong></p>
<ul>
<li>7+ years of experience in legal engineering, solutions architecture, or complex enterprise systems integration.</li>
<li>Strong hands-on experience with API integration and middleware (REST APIs, JSON, webhooks, auth, error handling, observability).</li>
<li>Comfort with light scripting or automation (e.g., Python, SQL, or similar) for building automation, integrations, and backend services.</li>
<li>Deep experience with CLM systems in a complex legal environment.</li>
<li>Experience designing and scaling workflows using tools like Tonkean or comparable orchestration platforms.</li>
<li>Demonstrated ability to translate ambiguous legal requirements into reliable technical systems.</li>
<li>Strong systems thinking around reliability, security, permissions, and data integrity.</li>
<li>Hands-on experience building with OpenAI APIs (or similar LLM platforms), including tool-calling and multi-step workflows.</li>
<li>Experience designing agentic systems with human-in-the-loop review and safety constraints.</li>
<li>Experience integrating legal systems with ticketing, orchestration, and data/BI platforms.</li>
<li>Strong technical documentation and architectural communication skills.</li>
</ul>
<p><strong>What Success Looks Like:</strong></p>
<ul>
<li>Legal workflows are faster, more scalable, and more resilient through a mix of automation, agents, and human review.</li>
<li>AI-powered systems are deployed responsibly, with clear guardrails and measurable impact.</li>
<li>Legal data is structured, usable, and trusted across systems.</li>
<li>The legal tech stack has a clear, extensible architecture that supports rapid iteration.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see <a href="https://cdn.openai.com/policies/eeo-policy-statement.pdf">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216K – $240K</Salaryrange>
      <Skills>API integration and middleware, Light scripting or automation, CLM systems, Workflow orchestration and middleware, AI/agent services, Data platforms, Legal data architecture, Legal analytics enablement, Integrations, APIs &amp; middleware</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2004c873-d6e3-41b7-96e2-12fd9faec7a4</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>4ca6fe6e-240</externalid>
      <Title>Partner Manager, OpenAI for Government</Title>
      <Description><![CDATA[<p>We are seeking a Partner Manager to join OpenAI for Government&#39;s Partnerships team to help lead and scale OpenAI&#39;s most important partner relationships. This role sits at the center of partner management, cross-functional execution, and internal governance. You will help shape how OpenAI works with major platform, cloud, systems integration, and go-to-market partners to unlock durable commercial outcomes, strengthen delivery readiness, and improve coordination across internal and external teams.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Own day-to-day management of a portfolio of key strategic partners, including Amazon Web Services (AWS) and other high-priority AI ecosystem relationships.</li>
<li>Serve as a central point of coordination for joint planning, governance, escalation management, and cross-functional execution.</li>
<li>Build and run partner operating cadences across leadership reviews, working teams, business development, technical alignment, and delivery stakeholders.</li>
<li>Help drive commercial outcomes by supporting co-sell motions, opportunity coordination, and execution against shared objectives.</li>
<li>Track partnership performance, milestones, dependencies, and risks across multiple workstreams and internal teams.</li>
<li>Coordinate internal resources across Go-to-Market, partnerships, product, legal, security, communications, policy, finance, and operations to support partner success.</li>
<li>Support implementation and operationalization of partnership agreements, statements of work, enablement plans, and governance structures.</li>
<li>Identify and resolve blockers across joint initiatives, including issues related to prioritization, delivery readiness, technical engagement, or executive alignment.</li>
<li>Help define the model for how OpenAI engages with partners such as cloud providers, strategic platforms, and other major partners over time.</li>
<li>Develop clear internal reporting on partner health, strategic priorities, and progress against business goals.</li>
<li>Drive consistency in how OpenAI manages partner communications, commitments, and cross-functional follow-through.</li>
<li>Contribute to long-term expansion strategies for strategic relationships, including identifying new areas for product, Go-to-Market, or delivery collaboration.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$239.4K – $295K</Salaryrange>
      <Skills>program management, cross-functional operations, strategic partnerships, cloud computing, artificial intelligence, machine learning, data platforms, cybersecurity, defense systems, AWS, Go-to-Market, product development, delivery models, partner ecosystems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing artificial general intelligence safely and beneficially.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/3dae15aa-27de-4d96-b4f2-d2186723b105</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2af5e0f2-31a</externalid>
      <Title>Member of Technical Staff - Software Engineer &amp; Machine Learning</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff – Software Engineer &amp; Machine Learning, you will work building AI Insights, a Copilot analytics product that enables our internal stakeholders to move from “What happened?” to “Why did it happen?” in minutes. You’ll design and implement AI-driven trend detection, cohort analysis, and drill-down workflows that connect metrics to real user conversations, developing AI-based insights on large-scale multi-modal Copilot data part of the Microsoft AI (MAI) organization.</p>
<p>We’re looking for an experienced Machine Learning engineer with strong hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product. The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for telemetry ingestion, anomaly detection, and cohort segmentation.</li>
<li>Implement ML-driven insights (prompted classifiers, anomaly detection) and integrate them into dashboards and APIs.</li>
<li>Develop secure, compliant workflows for handling production logs and conversation data.</li>
<li>Enable drill-down capabilities linking quantitative metrics to qualitative evidence for actionable context.</li>
<li>Collaborate with PMs and DS to refine hypotheses and deliver intuitive, high-performance interfaces.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging customer needs and perspectives, validating customer perspectives, focusing on broader customer context, and serving as a trusted advisor.</li>
<li>Hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR Bachelor’s Degree in Computer Science or related technical field AND 8+ years of such experience; OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
<p>#MicrosoftAI #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800–$234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops software, services, and solutions for personal and enterprise use.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-engineer-machine-learning-6/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c8c30607-84a</externalid>
      <Title>Product Manager - Data Platform Studio</Title>
      <Description><![CDATA[<p>The Platform team creates the technology that enables Spotify to learn quickly and scale easily, supporting rapid growth in our users and our business around the globe. We are looking for a passionate Product Manager to join Spotify&#39;s Data Platform Studio. Data Platform&#39;s mission is to enable the application of data in an intuitive and efficient way, helping Spotify extract value from data at scale.</p>
<p>Data Platform is responsible for how data is collected, processed, stored, governed, and made available to the thousands of engineers, data scientists, and analysts who build Spotify&#39;s products. With AI agents increasingly writing data pipelines and powering personalization, this is one of the most consequential infrastructure domains at Spotify.</p>
<p>As a Product Manager on the Data Platform team, you will be responsible for defining and driving the strategy and roadmap for your product area, connecting it to the broader Platform and company strategy. You will evaluate solutions across build, buy, or partner approaches to improve how Spotify works with data. You will define success metrics and track adoption, creating a continuous loop of learning and iteration.</p>
<p>Collaboration is key to success in this role. You will work closely with engineering, design, and data science to deliver intuitive, high-impact solutions. You will build strong relationships with internal teams to gather feedback and co-evolve tooling and capabilities. You will influence stakeholders across the studio and company to align on direction and priorities.</p>
<p>To be successful in this role, you will need a strong understanding of data platforms, analytics, and AI/ML tooling; the ability to translate complex technical concepts into clear product direction and strategy; and a strong systems-thinking mindset that connects workflows, platforms, and business needs.</p>
<p>If you are passionate about data and want to join a team that is shaping the future of music streaming, please apply!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, analytics, AI/ML tooling, product management, data infrastructure, developer tools, platform products</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that provides access to millions of songs, podcasts, and videos. It was founded in 2006 and is headquartered in Stockholm, Sweden.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/03437e2a-2d5e-4593-9e97-11271014932e</Applyto>
      <Location>Toronto</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7d23b7cf-337</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Do you enjoy solving complex technical problems on a global scale?</p>
<p>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration. The Measurement and Data Collaboration Engineering team is responsible for building the next generation of privacy-safe measurement systems that allow advertisers and partners to work with data in highly secure environments. Our platform integrates Microsoft’s Azure Confidential Compute Clean Room (ACCR) with third-party clean room partners to deliver a unified, compliant, and scalable measurement ecosystem. We are looking for a Senior Software Engineer who is passionate about distributed systems, privacy-enhancing technologies, secure data processing, and building reliable production services with global impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using C# or Java.</li>
<li>Design secure data collaboration workflows across multiple parties using modern privacy technologies, governance controls, and minimum-aggregation protections.</li>
<li>Drive integrations with external data and measurement partners, designing stable interfaces, schema governance patterns, and robust validation.</li>
<li>Lead initiatives to make delivery of high-quality software routine and efficient through the entire software development lifecycle, from inception and technical design through testing and excellence in production operations.</li>
<li>Collaborate closely with product, data science, privacy, and security teams to translate measurement needs into scalable platform capabilities.</li>
<li>Contribute to engineering team best practices leveraging AI dev tools across the software development lifecycle (SDLC).</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in computer science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>
<li>5+ years of experience building and operating large-scale distributed systems, backend services, or data platforms.</li>
<li>Experience with large-scale data processing frameworks (e.g. Spark, SQL-based pipelines) and cloud platforms.</li>
<li>Understanding of secure data processing, encryption, identity, and access control.</li>
<li>Experience building and operating services with strict SLAs.</li>
<li>Experience with Azure.</li>
<li>Background in advertising, marketing technology, attribution, or large-scale analytics.</li>
<li>Experience integrating third-party (vendor/partner) platforms, identity systems, or data collaboration technologies.</li>
<li>Solid problem-solving skills with a focus on reliability, observability, and system design.</li>
</ul>
<p>#MicrosoftAI Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area, where the base pay range for this role is USD $158,400 – $258,000 per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>C#, Java, JavaScript, Python, Azure, Spark, SQL, Cloud platforms, Secure data processing, Encryption, Identity, Access control, SLAs, Distributed systems, Backend services, Data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-131/</Applyto>
      <Location>Redmond</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1cd97be5-191</externalid>
      <Title>Engineering Leadership - Music Mission Team</Title>
      <Description><![CDATA[<p>The Music Mission team at Spotify is responsible for building tools and services to enable creation, promotion, expression, and monetization at scale. As part of this team, we&#39;re looking for an experienced engineering leader to lead a team of engineers across backend, client, and data disciplines.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and grow a team of engineers across multiple squads</li>
<li>Set clear technical and organizational direction across a complex, full-stack product area</li>
<li>Partner closely with Product, Design, Insights, and Marketing to deliver impactful features for artists</li>
<li>Drive execution across multiple concurrent initiatives, ensuring strong coordination and delivery</li>
<li>Translate ambiguity into clear priorities, helping teams navigate competing demands and inbound requests</li>
<li>Oversee the development of scalable systems that process and surface large volumes of artist data</li>
<li>Support critical platform operations and contribute to incident response readiness through team leadership</li>
<li>Contribute to strategic initiatives including AI-driven experiences and artist-focused product innovation</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of experience in software engineering, with significant experience in engineering leadership, including managing managers</li>
<li>Experience leading multiple teams or orgs simultaneously and operating across a broad, multi-disciplinary technical surface</li>
<li>Strong technical foundation and ability to engage in discussions across backend systems, client applications, and data platforms</li>
<li>Ability to navigate organizational complexity and drive alignment across senior cross-functional stakeholders</li>
<li>Experience building inclusive, high-performing teams and developing managers and senior engineers</li>
<li>Calm and structured in high-pressure situations, able to manage competing priorities and guide teams through critical decisions</li>
<li>Experience operating in environments with high data scale and understanding how to translate complex data into meaningful user experiences</li>
<li>Motivated by building products that serve creators, artists, or similar user communities</li>
</ul>
<p>Benefits:</p>
<ul>
<li>United States base range for this position is $203,410–$290,586 USD, plus equity</li>
<li>Health insurance</li>
<li>Six-month paid parental leave</li>
<li>401(k) retirement plan</li>
<li>Monthly meal allowance</li>
<li>23 paid days off</li>
<li>Paid flexible holidays</li>
<li>Paid sick leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$203,410–$290,586 USD</Salaryrange>
      <Skills>software engineering, engineering leadership, backend systems, client applications, data platforms, AI-driven experiences, artist-focused product innovation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that offers users access to millions of songs, podcasts, and videos. It has over 650 million monthly active users across web and mobile.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>203410</Compensationmin>
      <Compensationmax>290586</Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/df494b56-c5d2-4858-a980-c082b3de65c9</Applyto>
      <Location>New York</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1497165b-dfb</externalid>
      <Title>Senior Director, Data Platform and Engineering</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>The Technology, Data, and Intelligence Team: Okta is the leading independent identity provider. The Technology, Data, and Intelligence (TDI) organisation is the engine that powers Okta&#39;s global workforce, providing the technology and systems that enable our employees to do their best work.</p>
<p>The Opportunity: Okta is seeking a visionary and results-driven Senior Director, Data Platform and Engineering to lead a global data and analytics engineering team, ensuring our data assets are leveraged to their full potential. Reporting to the VP of Data and Insights, this role requires a leader who is as comfortable with a technical deep dive as they are with a strategic business discussion. You will be a &#39;player-coach&#39; who can build and mentor a world-class team while personally driving strategic initiatives and ensuring the integrity of our data foundation.</p>
<p>A core part of your responsibility will be to champion and enable our AI strategy and technical foundations, ensuring that clean, trusted, and well-governed data is the foundation for all AI initiatives. With the growth of AI in our products and our business, Okta’s data is more critical to our success than ever. This role will ensure we continue to support the organisation as it runs and grows, while building out the platform for the future. Candidates should be energised by that challenge.</p>
<p>What You&#39;ll Do</p>
<p>Lead and Inspire a High-Performing Team: Build, mentor, and scale a diverse, high-performing team of data and analytics engineers, fostering a culture of excellence, collaboration, and continuous learning.</p>
<p>Champion AI Enablement: Act as a critical partner to the AI Engineering teams. Ensure our data and data infrastructure and practices are optimised to support and accelerate AI development and deployment. This includes defining data governance standards, building high-quality training datasets, and developing scalable data pipelines for AI/ML models.</p>
<p>Advance the Data Foundation: Own the ‘back end’ data lifecycle for critical business domains, from data ingestion and ETL/ELT pipelines to data modelling, quality assurance, and governance. Ensure a single source of truth for key metrics across the organisation.</p>
<p>Enhance Data Security and Governance: Partner with Security leaders to define and implement policies and systems to safeguard data privacy, ensure regulatory compliance, and protect sensitive information. This includes establishing robust data access controls, audit trails, and data retention policies.</p>
<p>Partner Cross-Functionally: Collaborate closely with technology and business partners, including Product and Engineering leadership, to embed data and analytics into both internal and customer-facing product development.</p>
<p>What You Bring</p>
<p>Experience: 10+ years of progressive experience in data platform and data engineering work, with at least 5 years in a leadership role.</p>
<p>AI Know How: Demonstrated experience in building data foundations and pipelines specifically to support and accelerate AI and machine learning initiatives.</p>
<p>Technical Expertise: Deep knowledge of modern data stacks, including data warehousing, ETL/ELT pipelines, data modelling, and Tableau. Proficiency with Python, SQL, and experience with cloud-based data environments that enable AI use-cases (e.g., AWS, GCP, Azure).</p>
<p>Governance: Proven ability to build secure solutions for both commercial and public sectors. Experience with FedRAMP and the public sector cloud strongly preferred.</p>
<p>Cross-Functional Collaboration: Exceptional ability to translate technical needs across product and IT platforms. Must also have the ability to communicate complex data concepts to both technical and non-technical audiences.</p>
<p>Team Development: A track record of successfully hiring, mentoring, and leading high-performing data teams.</p>
<p>Problem-Solving: A strategic mindset and a passion for solving ambiguous, complex business problems with a rigorous, data-driven approach.</p>
<p>#LI-MC1 #LI-Hybrid #P14112_3419375</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$276,000-$379,500 USD</Salaryrange>
      <Skills>data platform, data engineering, AI, machine learning, Python, SQL, cloud-based data environments, AWS, GCP, Azure, data warehousing, ETL/ELT pipelines, data modelling, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a leading independent identity provider. It powers global workforces with technology and systems that enable employees to do their best work.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>276000</Compensationmin>
      <Compensationmax>379500</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7843125</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; San Francisco, California; Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>36d7aa8a-044</externalid>
      <Title>Member of Technical Staff - Software Engineer &amp; Machine Learning</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff – Software Engineer &amp; Machine Learning, you will build AI Insights, a Copilot analytics product that enables our internal stakeholders to move from “What happened?” to “Why did it happen?” in minutes. You’ll design and implement AI-driven trend detection, cohort analysis, and drill-down workflows that connect metrics to real user conversations, developing AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization.</p>
<p>We’re looking for an experienced Machine Learning engineer with strong hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product. The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for telemetry ingestion, anomaly detection, and cohort segmentation.</li>
<li>Implement ML-driven insights (prompted classifiers, anomaly detection) and integrate them into dashboards and APIs.</li>
<li>Develop secure, compliant workflows for handling production logs and conversation data.</li>
<li>Enable drill-down capabilities linking quantitative metrics to qualitative evidence for actionable context.</li>
<li>Collaborate with PMs and DS to refine hypotheses and deliver intuitive, high-performance interfaces.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging customer needs and perspectives, validating customer perspectives, focusing on broader customer context, and serving as a trusted advisor.</li>
</ul>
<p>Hands-on experience with observability (metrics, tracing, logs) and model evaluation frameworks.</p>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-engineer-machine-learning-4/</Applyto>
      <Location>New York</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bd829e13-6ce</externalid>
      <Title>Member of Technical Staff - Data Infrastructure Manager</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, and developers) so that everyone can realize its benefits.</p>
<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>
<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>
<p>What we’re looking for:</p>
<ul>
<li>Deep technical expertise in big data and distributed systems</li>
<li>A track record of leading and developing engineering talent</li>
<li>A passion for automation, observability, and operational excellence</li>
<li>The ability to translate complex technical strategy into clear, executable plans</li>
<li>Empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Team Leadership &amp; People Development: Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth. Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development. Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</p>
<p>Technical Strategy &amp; Architecture: Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems. Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase. Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</p>
<p>Platform &amp; Operations: Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response. Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams. Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM. Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</p>
<p>Cross-Functional Collaboration: Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows. Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos). Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</p>
<p>Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering work; OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work; OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience, OR Bachelor’s Degree AND 14+ years, OR equivalent experience.</li>
<li>5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>5+ years of hands-on experience with distributed systems, from bare-metal to cloud-native environments.</li>
<li>5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation fluency in Python, Bash, or PowerShell.</li>
<li>Proven track record managing CI/CD pipelines, release automation, and production incident response.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
</ul>
<p>#MicrosoftAI #MAIDPS #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,000 per year</Salaryrange>
      <Skills>Big Data and Distributed Systems, Data Infrastructure, DevOps, SRE, Platform Engineering, Containerized Application Deployments, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production Incident Response, Modern Data Platforms, Databricks, Relational and NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Multi-Agent Systems, Web Stacks, TypeScript, Node.js, React, PHP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>139900</Compensationmin>
      <Compensationmax>274000</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-3/</Applyto>
      <Location>Redmond</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f2022f1e-c1d</externalid>
      <Title>Staff Technical Program Manager, Monetization Data Science</Title>
      <Description><![CDATA[<p>About Pinterest:</p>
<p>Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime.</p>
<p>At Pinterest, we&#39;re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.</p>
<p>Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other&#39;s unique experiences and embrace the flexibility to do your best work.</p>
<p>Creating a career you love? It&#39;s Possible.</p>
<p>At Pinterest, AI isn&#39;t just a feature, it&#39;s a powerful partner that augments our creativity and amplifies our impact, and we&#39;re looking for candidates who are excited to be a part of that.</p>
<p>To get a complete picture of your experience and abilities, we&#39;ll explore your foundational skills and how you collaborate with AI.</p>
<p>Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think.</p>
<p>You can read more about our AI interview philosophy and how we use AI in our recruiting process.</p>
<p>The Team:</p>
<p>Pinterest helps people find inspiration and take action on it, connecting pinners with ideas and products they love.</p>
<p>Within EPD, the Monetization org builds the ads and merchant ecosystem that funds Pinterest’s business while protecting long-term user experience.</p>
<p>This Staff TPM role sits in Monetization as the TPM lead for Monetization Data Science, at the center of a highly cross-functional network (Product, Engineering, Design, Sales, PMM, Core, Platforms, Data).</p>
<p>What’s exciting is the team’s explicit shift toward a “data-driven monetization engine”: unifying fragmented data into a trusted single source of truth (SSOT), building an end-to-end input metrics funnel, enabling advanced segmentation, and democratizing analytics so teams can move faster and make better decisions with shared context.</p>
<p>What you’ll do:</p>
<p>Lead the Monetization DS execution roadmap: drive the integrated plan across the four strategic pillars (SSOT + funnel, segmentation, input-metrics cadence, democratized analytics) with clear milestones and success measures.</p>
<p>Productionalize our DS strategy: coordinate Platforms/Data Eng + Monetization Eng + DS to productionalize core tables, governance, reliability, and scale beyond DS-owned pipelines.</p>
<p>Enable new instrumentation: partner with Engineering to close observability gaps (especially delivery funnel instrumentation) so full-funnel survivability can be analyzed reliably.</p>
<p>Drive workflow automation: reduce manual human intervention in recurring data workflows and program operations; build durable mechanisms for monitoring, alerting, and dependency tracking.</p>
<p>Scale self-serve and democratization: deliver partner-facing tooling (dashboards / analytics surfaces) that makes staples the common language and supports fast diagnostics and opportunity mining.</p>
<p>Operationalize input metrics: establish/upgrade business review cadences so teams set goals and are accountable for moving controllable input metrics (not just reporting revenue outcomes).</p>
<p>Drive targeted deep dives: structure and execute cross-functional deep-dive programs (e.g., influencer population, auction density/demand) with clear hypotheses, decision asks, and downstream action plans.</p>
<p>Use GenAI as the default operating model for EP PgM execution, producing AI-assisted first drafts of core program artifacts, modernizing high-toil workflows into AI-first mechanisms (e.g., intake triage, status synthesis, action/decision extraction, risk &amp; dependency tracking), and synthesizing signals to proactively surface risks, decisions/trade-offs, and escalation paths.</p>
<p>Prototype solutions to augment decisions through data (e.g. dashboards, data analysis) or simplify processes (e.g. process and workflow helpers, or internal tools) using AI coding assistants (“vibe coding”).</p>
<p>Follow Pinterest AI guidance for risk, governance, and safety-by-design: appropriately handle sensitive data, validate AI-generated outputs, document assumptions/limits, and ensure AI-assisted workflows meet applicable policy/compliance expectations before broad adoption.</p>
<p>What we’re looking for:</p>
<p>Staff-level TPM scope and behaviors: proven ability to independently own multi-team, multi-quarter technical programs, including resolving ambiguity, driving decisions, and delivering outcomes through influence.</p>
<p>Deep cross-functional leadership: strong partnership with Product and Engineering plus ability to align Design, Sales, PMM, Core, Platforms, and Data on sequencing, tradeoffs, and adoption.</p>
<p>Data platform + metrics judgment: experience building trusted metrics/SSOT and operational cadences that shift org behavior toward leading indicators and fast diagnosis.</p>
<p>Mechanism builder, not “process administrator”: track record of creating durable operating systems (cadence, dashboards, decision logs, RACI/DRIs) that reduce toil and increase velocity.</p>
<p>Excellent risk and dependency management: anticipates cross-org failure modes, keeps stakeholders aligned with crisp comms, and escalates with clear options and recommendations.</p>
<p>AI-first execution mindset: demonstrated ability to use GenAI to accelerate planning, program operations, and stakeholder communications, starting with AI drafts and applying strong judgment to validate, refine, and drive decisions.</p>
<p>Workflow design, AI fluency, data &amp; insights orientation: experience turning repeatable program work into durable, low-toil mechanisms and improving decision-making by using GenAI (e.g., strong prompting, vibe coding lightweight scripts/tools, dashboards, data analysis, and leveraging agents where appropriate).</p>
<p>Safety-by-design AI fluency: experience operating within AI governance expectations (risk assessment, data handling, model/output validation, auditability/traceability) and proactively identifying where AI use is not appropriate or requires additional controls.</p>
<p>Bachelor’s degree in Computer Science, Engineering, a related field or equivalent experience.</p>
<p>Relocation Statement:</p>
<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>
<p>In-Office Requirement Statement:</p>
<p>We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.</p>
<p>This role will need to be in the office for in-person collaboration 1-2 times every 6 months and can therefore be situated anywhere in the country.</p>
<p>#LI-REMOTE #LI-JD3</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$145,747-$300,067 USD</Salaryrange>
      <Skills>Staff-level TPM scope and behaviors, Deep cross-functional leadership, Data platform + metrics judgment, Mechanism builder, not “process administrator”, Excellent risk and dependency management, AI-first execution mindset, Workflow design, AI fluency, data &amp; insights orientation, Safety-by-design AI fluency</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save images and videos.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494686</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1e152dd5-a03</externalid>
      <Title>Machine Learning Software Engineer</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff – Software Engineer &amp; Machine Learning, you will work on building AI Insights, a Copilot analytics product that enables our internal stakeholders to move from “What happened?” to “Why did it happen?” in minutes. You’ll design and implement AI-driven trend detection, cohort analysis, and drill-down workflows that connect metrics to real user conversations.</p>
<p>You will develop AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization. We’re looking for an experienced Machine Learning engineer with strong hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product. The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for telemetry ingestion, anomaly detection, and cohort segmentation.</li>
<li>Implement ML-driven insights (prompted classifiers, anomaly detection) and integrate them into dashboards and APIs.</li>
<li>Develop secure, compliant workflows for handling production logs and conversation data.</li>
<li>Enable drill-down capabilities linking quantitative metrics to qualitative evidence for actionable context.</li>
<li>Collaborate with PMs and DS to refine hypotheses and deliver intuitive, high-performance interfaces.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging customer needs and perspectives, validating customer perspectives, focusing on broader customer context, and serving as a trusted advisor.</li>
<li>Be hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications: Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Preferred Qualifications: Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists. Demonstrated interest in Responsible AI.</p>
<p>#MicrosoftAI #mai-datainsights Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. A different range applies to specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $188,000 – $304,200 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/machine-learning-software-engineer-5/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5c9aa0a4-f07</externalid>
      <Title>Sr. Engineering Manager, CAMP</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Sr. Engineering Manager to lead the Content Acquisition &amp; Media Platform (CAMP) team. The CAMP team owns the critical platforms for content acquisition, media ingestion, and media signal processing. The team collaborates closely with multiple partner teams to bring the most inspirational content to Pinterest and enhance the content foundation of all products, adopting LLM/VLM in a near real-time system to understand content and build fundamental signals for LLM innovations, and standing at the frontier of GenAI compliance requirements.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading a cross-platform team responsible for content acquisition, media ingestion and media signal platform.</li>
<li>Setting the team&#39;s technical direction and driving execution on quality, performance, and reliability.</li>
<li>Partnering closely with Content, Shopping catalog and GenAI teams to improve Pinterest content ecosystem.</li>
<li>Guiding the team through platform and architectural decisions, balancing product needs, technical investments, and long-term maintainability.</li>
<li>Building and growing a high-performing team through hiring, coaching, and developing engineers.</li>
<li>Raising the bar on engineering excellence through strong operational practices, performance measurement, testing, and release quality.</li>
<li>Helping Pinterest to meet GenAI compliance requirements.</li>
</ul>
<p>The ideal candidate will have 9+ years of software engineering experience, including distributed systems and big data platforms, and 5+ years of engineering management experience leading strong client or platform engineering teams.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,402-$429,454 USD</Salaryrange>
      <Skills>software engineering, distributed system, big data platform, engineering management, content acquisition, media ingestion, media signal processing, LLM/VLM, GenAI compliance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference. It has millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7680804</Applyto>
      <Location>San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dacc9b06-4d8</externalid>
      <Title>Member of Technical Staff - Principal Data Infrastructure Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all, consumers, businesses, and developers alike, so that everyone can realize its benefits.</p>
<p>We’re looking for a Member of Technical Staff – Principal Data Infrastructure Engineer. This role is a dynamic blend of Platform Engineering, DevOps/SRE, and Big Data Infrastructure Engineering, focused on enabling large-scale data and ML pipelines and intelligent systems. If you’ve architected big data platforms from the ground up and are eager to apply that expertise to consumer AI, we want to hear from you.</p>
<p>You’ll bring:</p>
<ul>
<li>Deep technical expertise</li>
<li>A passion for automation and observability</li>
<li>Fluency in distributed systems</li>
<li>Creativity to design scalable solutions</li>
<li>And just as importantly: empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or a 25-mile commute of a non-U.S., country-specific location, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and maintain scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications.</li>
<li>Champion DevOps and SRE best practices: automated deployments, service monitoring, and incident response.</li>
<li>Build a self-service big data platform that empowers data and platform engineers and researchers.</li>
<li>Develop robust CI/CD pipelines and automate infrastructure provisioning using Infrastructure as Code tools (Bicep, Terraform, ARM).</li>
<li>Collaborate with Data Engineers, Data Scientists, AI Researchers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Lead technical design reviews and uphold a clean, secure, and well-documented codebase.</li>
<li>Proactively identify and resolve bottlenecks in data pipelines and infrastructure.</li>
<li>Optimize system performance across storage, compute, and analytics layers.</li>
<li>Partner with Security teams to enhance system security (IAM, OAuth, Kerberos).</li>
<li>Embody and promote Microsoft’s values: Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications: Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>4+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>3+ years of hands-on experience managing and scaling distributed systems, from bare-metal to cloud-native environments.</li>
<li>2+ years deploying containerized applications using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation skills using Python, Bash, or PowerShell.</li>
<li>Proven success in CI/CD pipeline management, release automation, and production troubleshooting.</li>
<li>Experience working with Databricks for scalable data processing and analytics.</li>
<li>Familiarity with security practices in infrastructure environments, including IAM, OAuth, and Kerberos administration.</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including a deep understanding of data storage and processing technologies: relational &amp; NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Capacity planning and incident management for large-scale big data systems.</li>
<li>Solid collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
<li>Exposure to agentic workflows, deep learning, or AI frameworks.</li>
<li>Practical experience integrating LLMs (e.g., GPT-based models) into daily workflows, automating documentation, code generation, reviews, and operational intelligence.</li>
<li>Solid grasp of prompt engineering techniques to design, optimize, and evaluate interactions with LLMs.</li>
<li>Demonstrated ability to troubleshoot and resolve complex performance and scalability issues across infrastructure layers.</li>
<li>Excellent interpersonal and communication skills, with a solid passion for mentorship and continuous learning.</li>
<li>Experience applying LLMs to DevOps workflows, enhancing incident response, and streamlining cross-functional collaboration is a solid advantage.</li>
</ul>
<p>#MicrosoftAI #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>Big Data Infrastructure, DevOps, SRE, Platform Engineering, Distributed Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Databricks, CI/CD Pipelines, Infrastructure as Code, Bicep, Terraform, ARM, Python, Bash, PowerShell, Kubernetes, Helm, Kustomize, LLMs, GPT-based models, Prompt Engineering, Agentic Workflows, Deep Learning, AI Frameworks, Containerized Applications, Security Practices, IAM, OAuth, Kerberos Administration, Web Stacks, TypeScript, Node.js, React, PHP, Modern Data Platforms, Spark Compute Engines, Distributed File Systems, Messaging Systems, Capacity Planning, Incident Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-principal-data-infrastructure-engineer-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f5de48cc-0a9</externalid>
      <Title>Machine Learning Engineering Manager – AI Insights</Title>
      <Description><![CDATA[<p>As a Machine Learning Engineering Manager, you will work on developing AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization.</p>
<p>We&#39;re looking for an experienced Engineering Manager with solid hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product.</p>
<p>We&#39;re looking for someone with experience leading product development leveraging data pipelines, data science, and machine learning, as well as a solid communicator and great teammate.</p>
<p>The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>
<p>Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a team of engineers building AI-powered analytics for Copilot usage and quality metrics.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Drive integration with existing platforms (Azure Databricks and other Microsoft internal systems) and ensure reliability, scalability, and cost efficiency.</li>
<li>Have solid experience in ML systems, anomaly detection, and large-scale data processing.</li>
<li>Generalize machine learning (ML) solutions into repeatable frameworks.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging customer needs and perspectives, validating customer perspectives, focusing on broader customer context, and serving as a trusted advisor.</li>
<li>Be hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</li>
</ul>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Experience writing production-quality Python or Java code.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems, Data Pipelines, Data Science, Anomaly Detection, Predictive Insights, Compliance, Security, Azure Data Bricks, Observability, Model Evaluation, Responsible AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/machine-learning-engineering-manager-ai-insights-6/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5928e394-575</externalid>
      <Title>Machine Learning Engineering Manager – AI Insights</Title>
      <Description><![CDATA[<p>As a Machine Learning Engineering Manager, you will work on developing AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization.</p>
<p>We&#39;re looking for an experienced Engineering Manager with solid hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product.</p>
<p>We&#39;re looking for someone with experience leading product development leveraging data pipelines, data science, and machine learning, as well as a solid communicator and great teammate.</p>
<p>The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>
<p>Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</p>
<p>This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<p>Lead and mentor a team of engineers building AI-powered analytics for Copilot usage and quality metrics.</p>
<p>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</p>
<p>Prototype and productionize ML models for anomaly detection and predictive insights.</p>
<p>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</p>
<p>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</p>
<p>Drive integration with existing platforms (Azure Databricks and other Microsoft internal systems) and ensure reliability, scalability, and cost efficiency.</p>
<p>Have solid experience in ML systems, anomaly detection, and large-scale data processing.</p>
<p>Generalize machine learning (ML) solutions into repeatable frameworks.</p>
<p>Independently write efficient, readable, extensible code and model pipelines.</p>
<p>Commit to a customer-oriented focus by acknowledging customer needs and perspectives, validating customer perspectives, focusing on broader customer context, and serving as a trusted advisor.</p>
<p>Be hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</p>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<p>Master&#39;s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor&#39;s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</p>
<p>Experience writing production-quality Python or Java code.</p>
<p>Demonstrated interest in Responsible AI.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems, Data Pipelines, Data Science, Observability, Model Evaluation Frameworks, Responsible AI, Cloud Computing, DevOps, Agile Methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/machine-learning-engineering-manager-ai-insights-5/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9680484f-e47</externalid>
      <Title>Member of Technical Staff - Software Engineer &amp; Machine Learning</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff – Software Engineer &amp; Machine Learning, you will work on building AI Insights, a Copilot analytics product that enables our internal stakeholders to move from “What happened?” to “Why did it happen?” in minutes. You’ll design and implement AI-driven trend detection, cohort analysis, and drill-down workflows that connect metrics to real user conversations, developing AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization.</p>
<p>We’re looking for an experienced Machine Learning engineer with strong hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product. The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for telemetry ingestion, anomaly detection, and cohort segmentation.</li>
<li>Implement ML-driven insights (prompted classifiers, anomaly detection) and integrate them into dashboards and APIs.</li>
<li>Develop secure, compliant workflows for handling production logs and conversation data.</li>
<li>Enable drill-down capabilities linking quantitative metrics to qualitative evidence for actionable context.</li>
<li>Collaborate with PMs and DS to refine hypotheses and deliver intuitive, high-performance interfaces.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging and validating customer needs and perspectives, focusing on the broader customer context, and serving as a trusted advisor.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 – $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-software-engineer-machine-learning-5/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8dfa46b7-eaa</externalid>
      <Title>Machine Learning Software Engineer</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff – Software Engineer &amp; Machine Learning, you will work on building AI Insights, a Copilot analytics product that enables our internal stakeholders to move from “What happened?” to “Why did it happen?” in minutes. You’ll design and implement AI-driven trend detection, cohort analysis, and drill-down workflows that connect metrics to real user conversations.</p>
<p>You will develop AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization. We’re looking for an experienced machine learning engineer with strong hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product.</p>
<p>The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for telemetry ingestion, anomaly detection, and cohort segmentation.</li>
<li>Implement ML-driven insights (prompted classifiers, anomaly detection) and integrate them into dashboards and APIs.</li>
<li>Develop secure, compliant workflows for handling production logs and conversation data.</li>
<li>Enable drill-down capabilities linking quantitative metrics to qualitative evidence for actionable context.</li>
<li>Collaborate with PMs and DS to refine hypotheses and deliver intuitive, high-performance interfaces.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging and validating customer needs and perspectives, focusing on the broader customer context, and serving as a trusted advisor.</li>
<li>Be hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
<p>#MicrosoftAI #mai-datainsights</p>
<p>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $188,000 – $304,200 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/machine-learning-software-engineer-4/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a8f02572-a83</externalid>
      <Title>Data &amp; AI Platform Architect (Professional Services)</Title>
      <Description><![CDATA[<p>You will work with clients on short to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 10% of the time</li>
</ul>
<p>About Databricks:</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8462016002</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7810e9fe-99e</externalid>
      <Title>Machine Learning Software Engineer</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff – Software Engineer &amp; Machine Learning, you will work on building AI Insights, a Copilot analytics product that enables our internal stakeholders to move from “What happened?” to “Why did it happen?” in minutes. You’ll design and implement AI-driven trend detection, cohort analysis, and drill-down workflows that connect metrics to real user conversations.</p>
<p>You will develop AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization. We’re looking for an experienced machine learning engineer with strong hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product. The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for telemetry ingestion, anomaly detection, and cohort segmentation.</li>
<li>Implement ML-driven insights (prompted classifiers, anomaly detection) and integrate them into dashboards and APIs.</li>
<li>Develop secure, compliant workflows for handling production logs and conversation data.</li>
<li>Enable drill-down capabilities linking quantitative metrics to qualitative evidence for actionable context.</li>
<li>Collaborate with PMs and DS to refine hypotheses and deliver intuitive, high-performance interfaces.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging and validating customer needs and perspectives, focusing on the broader customer context, and serving as a trusted advisor.</li>
<li>Be hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
<p>#MicrosoftAI #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Machine Learning, Data Platforms, Distributed Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/machine-learning-software-engineer-6/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>925adf3c-bc6</externalid>
      <Title>Machine Learning Engineering Manager - AI Insights</Title>
      <Description><![CDATA[<p>As a Machine Learning Engineering Manager, you will work on developing AI-based insights on large-scale multi-modal Copilot data as part of the Microsoft AI (MAI) organization.</p>
<p>We&#39;re looking for an experienced Engineering Manager with solid hands-on skills in machine learning, data platforms, and distributed systems to lead the development of AI Insights, a next-generation Copilot analytics product.</p>
<p>We&#39;re looking for someone with experience leading product development leveraging data pipelines, data science, and machine learning, who is also a solid communicator and a great teammate.</p>
<p>The right candidate takes the initiative and enjoys building world-class consumer experiences and products in a fast-paced environment.</p>
<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>
<p>Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a team of engineers building AI-powered analytics for Copilot usage and quality metrics.</li>
<li>Own technical strategy for trend detection, cohort analysis, and drill-down workflows linking quantitative metrics to qualitative conversation evidence.</li>
<li>Prototype and productionize ML models for anomaly detection and predictive insights.</li>
<li>Ensure compliance and security for data handling across telemetry, logs, and conversation datasets.</li>
<li>Collaborate with PMs, data scientists, and UX to define roadmap and deliver intuitive, high-impact workflows.</li>
<li>Drive integration with existing platforms (Azure Databricks and other Microsoft internal systems) and ensure reliability, scalability, and cost efficiency.</li>
<li>Have solid experience in ML systems, anomaly detection, and large-scale data processing.</li>
<li>Generalize machine learning (ML) solutions into repeatable frameworks.</li>
<li>Independently write efficient, readable, extensible code and model pipelines.</li>
<li>Commit to a customer-oriented focus by acknowledging and validating customer needs and perspectives, focusing on the broader customer context, and serving as a trusted advisor.</li>
<li>Be hands-on with observability (metrics, tracing, logs) and model evaluation frameworks.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Proven experience leading small engineering and machine learning teams, and collaborating effectively with cross-functional stakeholders including product managers, UX designers, and security specialists.</li>
<li>Experience writing production-quality Python or Java code.</li>
<li>Demonstrated interest in Responsible AI.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>Machine Learning, Data Platforms, Distributed Systems, Python, Java, C++, C#, JavaScript, Azure Databricks, Responsible AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/machine-learning-engineering-manager-ai-insights-4/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9c1e9acc-5d9</externalid>
      <Title>Enterprise Account Executive, New Business</Title>
      <Description><![CDATA[<p>Want to help solve the world&#39;s toughest problems with data and AI? As an Enterprise Account Executive, you will be responsible for helping customers in Italy to unlock the value of Databricks&#39; Data Intelligence Platform.</p>
<p>You will assess your territory and develop a successful execution strategy, using a solution-based approach to selling and creating value for new logo accounts. You will identify and close quick, small wins while managing longer, complex sales cycles, and track all customer details in Salesforce.</p>
<p>To succeed in this role, you will need to understand the data platform and cloud ecosystems, have exposure to the software industry and a good knowledge of selling SaaS, Data, and Business Value. You will also need to be competent with prospecting research and able to map out key stakeholders, adept in selling to technical buyers, and familiar with MEDDPICC.</p>
<p>In return, you will have the opportunity to work with a talented team, promote the value of the Databricks Data Intelligence Platform, and ensure 100% satisfaction among all customers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platform, cloud ecosystems, software industry, SaaS, Data, Business Value, prospecting research, MEDDPICC</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks operates at the leading edge of the Data and AI space, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8509691002</Applyto>
      <Location>Milan, Italy</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f428480b-492</externalid>
      <Title>Staff Technical Program Manager- Unity Catalog</Title>
      <Description><![CDATA[<p>P-1489</p>
<p><strong>Platform &amp; Product Experiences | Shape How Databricks Executes at Scale</strong></p>
<p>At Databricks, Staff TPMs don’t just run programs; they define how the company executes at scale. This role sits at the centre of our highest-priority platform and product investments, partnering across engineering, product, and go-to-market teams to bring foundational capabilities to market globally.</p>
<p>You will lead complex, high-visibility programs where the path isn’t fully defined, align senior stakeholders, and build the operating models that scale execution across the company.</p>
<p><strong>The Impact You’ll Make</strong></p>
<p>You will own delivery of some of Databricks’ most important initiatives and programs that reach tens of thousands of enterprise customers worldwide. You will influence roadmap decisions, drive execution across organisations, and ensure launches translate into real customer adoption and business impact.</p>
<p>Examples of programs you may lead include:</p>
<ul>
<li>Driving the evolution and adoption of Unity Catalog as the foundation for data governance across the platform</li>
<li>Scaling core platform experiences that define how customers interact with Databricks (e.g., workspace, identity, access, and cross-product workflows)</li>
<li>Leading cross-functional initiatives that unify product experiences across data, AI, and governance capabilities</li>
</ul>
<p><strong>What You’ll Own</strong></p>
<ul>
<li>End-to-End Program Leadership: Own complex, cross-functional programs from initial scoping through launch and adoption. Define program structure, drive execution, and hold teams accountable to clear outcomes.</li>
<li>Cross-Organizational Alignment: Align engineering, product, design, field, legal, and marketing around a shared plan. Manage dependencies, resolve conflicts, and keep execution on track.</li>
<li>Product Launch &amp; Enterprise Adoption: Partner with field teams, solutions architects, and customer success to drive successful launches. Build early access programs, capture customer feedback, and translate it into execution priorities.</li>
<li>Operational Excellence: Identify where the organisation is losing speed, design scalable processes, and drive adoption across teams. Build systems that outlast individual programs.</li>
<li>Executive Communication: Own communication with senior leadership. Provide clear updates, highlight risks, and enable fast, well-informed decision-making.</li>
<li>Data-Driven Execution: Define success metrics upfront. Track progress rigorously and use data to guide decisions and demonstrate impact.</li>
</ul>
<p><strong>What We’re Looking For</strong></p>
<ul>
<li>10+ years leading large-scale, cross-functional programs in enterprise software or B2B technology</li>
<li>Proven experience delivering end-to-end product launches across multiple geographies and functions</li>
<li>Demonstrated ability to bring structure to ambiguous, fast-moving environments</li>
<li>Experience influencing roadmap and prioritisation, not just delivery</li>
<li>Credibility with both engineers and executives; strong communication skills with VP and C-level stakeholders</li>
<li>Track record of building scalable processes and operating models</li>
<li>Strong instincts for risk management, escalation, and stakeholder alignment</li>
<li>Experience defining and tracking success metrics; familiarity with SQL or dashboards is a plus</li>
<li>Experience operating with high autonomy and ownership in ambiguous, high-stakes environments</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience with cloud platforms (AWS, Azure, GCP)</li>
<li>Background in data platforms, governance systems, or developer-facing products</li>
<li>Familiarity with Databricks or similar large-scale data ecosystems</li>
<li>Experience scaling both 0→1 programs and mature systems</li>
<li>Advanced degree in a technical field</li>
</ul>
<p><strong>Why This Role is Unique</strong></p>
<p>This role sits at the centre of Databricks’ core platform investments, with direct access to senior leadership and the opportunity to shape how the company executes. You will work on high-impact programs, influence key decisions, and build systems that scale across the organisation.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Comprehensive health coverage (medical, dental, vision)</li>
<li>401(k) plan</li>
<li>Equity awards</li>
<li>Flexible time off</li>
<li>Paid parental leave and family planning support</li>
<li>Gym reimbursement</li>
<li>Annual personal development fund</li>
<li>Work headphones reimbursement</li>
<li>Employee Assistance Program (EAP)</li>
<li>Business travel accident insurance</li>
<li>Mental wellness resources</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilising the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range: $180,200-$247,850 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Local Pay Range $180,200-$247,850 USD</Salaryrange>
      <Skills>technical program management, cross-functional programs, data governance, cloud platforms, data platforms, governance systems, developer-facing products, large-scale data ecosystems, scalable processes, operating models, risk management, escalation, stakeholder alignment, success metrics, SQL, dashboards, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8521198002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a3618001-1b4</externalid>
      <Title>GTM Architect</Title>
      <Description><![CDATA[<p>The GTM Architect, Enterprise will report to the Head of Enterprise GTM and will own the operating systems, measurement, and engagement model that power Scale AI&#39;s enterprise revenue motion.</p>
<p>This role is responsible for designing and running the enterprise GTM architecture across RevOps, sales performance, and cross-functional execution, ensuring Scale&#39;s largest and most strategic accounts are engaged with rigor, consistency, and impact.</p>
<p>The GTM Architect will bring a strong point of view on how Scale engages enterprise accounts, how performance is measured, and how teams operate day-to-day to drive predictable growth. Over time, this role will have the opportunity to build and lead a GTM / RevOps team.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and evolve Scale&#39;s enterprise GTM operating model, including account engagement strategy, funnel design, and sales execution standards</li>
<li>Design and manage the enterprise RevOps systems stack (e.g., CRM, forecasting, reporting, planning tools) to support scalable growth</li>
<li>Define, track, and operationalize core enterprise GTM metrics across pipeline health, deal velocity, forecast accuracy, and rep productivity</li>
<li>Establish and run sales operating cadences including pipeline reviews, forecast calls, QBRs, and performance reviews</li>
<li>Partner with Sales Leadership and Finance to design, implement, and maintain enterprise sales compensation plans, including quota governance and attainment reporting</li>
<li>Build executive-level dashboards and reporting to support GTM decision-making and leadership visibility</li>
<li>Serve as a strategic thought partner to Enterprise Sales leaders, bringing a strong opinion on how accounts should be covered, prioritized, and engaged</li>
<li>Act as the connective tissue across Sales, Marketing, Finance, Product, and Solutions Engineering for enterprise GTM planning and execution</li>
<li>Support annual and quarterly planning efforts including territory design, capacity modeling, headcount planning, and quota setting</li>
<li>Ensure data integrity, process clarity, and operational discipline across all enterprise GTM motions</li>
</ul>
<p>Ideally, You Will Have:</p>
<ul>
<li>8–12+ years of experience in RevOps, Sales Ops, or GTM Strategy roles, with deep exposure to enterprise sales environments</li>
<li>Experience supporting complex, multi-stakeholder enterprise sales motions in high-growth B2B SaaS or platform companies</li>
<li>Proven ownership of sales systems, forecasting, and performance measurement at scale</li>
<li>Hands-on experience designing and managing enterprise sales compensation plans and reporting</li>
<li>Strong understanding of enterprise GTM metrics, planning cycles, and operating cadences</li>
<li>A clear point of view on how enterprise accounts should be engaged and how to operationalize that engagement at scale</li>
<li>Comfort influencing senior sales leaders and executives without direct authority</li>
<li>Excellent written and verbal communication skills, including experience building executive-level materials and dashboards</li>
<li>Strong command of GTM systems and tools (e.g., Salesforce, Clari, planning and reporting tools)</li>
<li>High attention to detail paired with the ability to operate at a strategic altitude</li>
<li>Demonstrated ability to operate as a senior IC with the ambition and capability to build and lead a team over time</li>
<li>Technical curiosity or experience working alongside technical products and teams; familiarity with AI, ML, or data platforms is a plus</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Please reference the job posting&#39;s subtitle for where this position will be located. For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $176,000-$220,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$176,000-$220,000 USD</Salaryrange>
      <Skills>RevOps, Sales Ops, GTM Strategy, CRM, Forecasting, Reporting, Planning Tools, Sales Compensation Plans, Quota Governance, Attainment Reporting, Executive-Level Dashboards, Leadership Visibility, Strategic Thought Partner, Enterprise Sales Leaders, Account Engagement Strategy, Funnel Design, Sales Execution Standards, Data Integrity, Process Clarity, Operational Discipline, AI, ML, Data Platforms, Technical Products, GTM Systems and Tools, Salesforce, Clari</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4662232005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, but primarily as execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses; for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
      <Skills>Python, SQL, modern data/AI tooling, statistics, experimentation, causal inference, measurement, data science, machine learning, quantitative engineering, client-facing technical work, LLM evaluation frameworks, retrieval systems, agentic workflows, Spark, Ray, Airflow, AWS, GCP, Snowflake, BigQuery, APIs, internal tools, forecasting, optimization, statistical modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b68ff4cc-e74</externalid>
      <Title>Data Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>
<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>
<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>
<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>
<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>
<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>
<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data</li>
<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>
<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>
<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>
<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>
<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation frameworks</li>
<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>
<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>
<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>
<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>
<li>Care about the societal impacts of AI and are motivated by safety work</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>
<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>
<li>Built data infrastructure that supports ML model monitoring or evaluation</li>
<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>
<li>Developed internal tooling or self-service analytics platforms</li>
</ul>
<p><strong>Strong candidates need not have:</strong></p>
<ul>
<li>A formal degree in Computer Science or a related field; we value practical experience and demonstrated ability over credentials</li>
<li>Prior experience in AI or machine learning; you&#39;ll learn the domain-specific context on the job</li>
<li>Previous experience at an AI safety or research organization</li>
<li>Deep expertise across every tool listed above; familiarity with a subset and a willingness to learn is enough</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£170,000-£220,000 GBP</Salaryrange>
      <Skills>SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156057008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>04c1ff49-2d1</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Willingness to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Willingness to travel to customers 10% of the time</li>
</ul>
<p>Preferred: Databricks Certification (not essential)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8396801002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>273f3f27-7de</externalid>
      <Title>Staff Product Manager, Content Experience</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Product Manager to lead our Content Experience strategy. In this role, you will own how users discover, learn from, and act on content: across documentation, in-product help, AI-assisted guidance, and beyond.</p>
<p>You&#39;ll help define the future of content at Databricks by making it a first-class product experience and integrating content development into product development.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the content experience end-to-end, ensuring it&#39;s helpful, intuitive, and actionable</li>
<li>Driving strategic improvements to content tooling and workflows</li>
<li>Building an architecture of participation that enables content experts to contribute directly to content</li>
<li>Integrating AI to transform content experiences</li>
<li>Defining metrics that matter and tracking content engagement, time-to-task, support deflection, and user satisfaction</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of product management experience, with a proven track record of leading cross-functional initiatives and delivering high-impact user experiences</li>
<li>Deep understanding of developer tools, data platforms, or technical products with large surface areas</li>
<li>Strong systems mindset, comfortable designing scalable workflows, content architectures, and tooling integrations</li>
<li>Experience with developer documentation, content platforms, or product onboarding is a plus</li>
<li>Strong customer empathy and an obsession with helping users succeed</li>
<li>Familiarity with AI technologies (especially LLMs) and how they can be applied to content workflows and user guidance</li>
<li>Experience working with technical and non-technical contributors in a collaborative content ecosystem</li>
</ul>
<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range for this role is $181,700-$249,800 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,700-$249,800 USD</Salaryrange>
      <Skills>Product Management, Content Strategy, AI Technologies, Developer Tools, Data Platforms, Technical Products, LLMs, Content Platforms, Product Onboarding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8040989002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b244f27-9fd</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>
<p>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications. Consult on architecture and design; bootstrap or implement customer projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</p>
<p>Provide an escalated level of support for customer operational issues. You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</p>
<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfort writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
</ul>
<p>Travel to customers 20% of the time.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461258002</Applyto>
      <Location>Raleigh, North Carolina</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0fe57d9a-28e</externalid>
      <Title>Engagement Manager</Title>
      <Description><![CDATA[<p>Job Title: Engagement Manager</p>
<p>We are seeking an experienced Engagement Manager to join our team in Tokyo. As an Engagement Manager, you will be responsible for driving customer success by ensuring that our customers get the most value from our products and services.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Collaborate with sales counterparts to understand customer needs and develop solutions that deliver value</li>
<li>Identify opportunities for new services and articulate the business value</li>
<li>Perform as the Engagement Manager in the assigned area with full accountability for meeting/exceeding Professional Services and Training bookings and revenue targets</li>
<li>Consult with clients to understand and analyze engagement scope, requirements, time, cost, and benefits</li>
<li>Drive resolution of delivery challenges, address resource contentions, scoping issues, and manage expectations</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Strong fundamental knowledge of Big Data platform implementation through the technology, operations, and security/governance lenses</li>
<li>Proven experience selling services offerings in an implementation, advisory, education, or change management capacity</li>
<li>Experience in senior customer-facing roles that require a mix of influencing, validating, negotiating, understanding, and execution with both business and technology audiences</li>
<li>Consistent track record of identifying customer needs and successfully implementing solutions</li>
<li>Experience owning projects/programs in agile scrum/kanban as well as waterfall delivery methodologies</li>
<li>Strong problem-solving skills, addressing customers&#39; pain points with modern technologies</li>
<li>Excellent presentation skills, delivering proposals that enforce good project governance and drive scalable delivery practices for both internal and external executives</li>
<li>High-level orchestration skills to align both internal and external stakeholders when proposing large initiatives</li>
<li>Strong service delivery and program management skills with the ability to synthesize customer success outcomes into well-structured program plans that deliver against such outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Prior experience proposing projects/programs to customers at a consulting firm, systems integrator, or software/cloud vendor</li>
<li>Bachelor&#39;s degree in Computer Science or related educational background</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all employees</li>
</ul>
<p>Commitment to Diversity and Inclusion:</p>
<ul>
<li>Databricks is committed to fostering a diverse and inclusive culture where everyone can excel</li>
</ul>
<p>Compliance:</p>
<ul>
<li>Access to export-controlled technology or source code is required for performance of job duties, and it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Platforms Implementation, Customer Success, Project Management, Program Management, Agile Scrum/Kanban Delivery Methodology, Waterfall Methodology, Problem-Solving Skill, Presentation Skills, Project Governance, Service Delivery, Prior Experience in Project/Program Proposal to Customers at Consulting, SI, Software/Cloud Vendor, Bachelor&apos;s Degree in Computer Science or Related Educational Background</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8501186002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f1950023-ef7</externalid>
      <Title>Senior Engineering Manager, Activation</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Brex’s AI-native automation and world-class service eliminate manual expense and accounting tasks for customers so they can focus on what matters most. Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Engineering</p>
<p>Engineering at Brex is about building systems that scale with speed and intention. Our teams span Software, Data, Security, and IT, and operate with high autonomy and deep collaboration. We tackle hard technical problems, own our outcomes, and push for excellence at every level, from architecture to deployment. It’s an environment where engineering is a craft, and builders become leaders.</p>
<p>What you’ll do</p>
<p>You will lead an engineering group focused on building the systems and product experiences that power customer activation at Brex, including onboarding, account setup, verifications, integrations, and implementation workflows that help customers realize value quickly. This role requires strategic thinking, operational excellence, technical leadership, and a deep passion for delivering frictionless, AI-enhanced customer journeys.</p>
<p>The ideal candidate is a seasoned engineering leader with experience scaling user-facing onboarding systems, delivering high-quality product experiences, and partnering deeply across Product, Design, Operations, and GTM teams.</p>
<p>Where you’ll work</p>
<p>This role will be based in our New York office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<ul>
<li>Take an active role in driving business and product strategies, championing a seamless, intuitive, and efficient onboarding and implementation experience.</li>
<li>Collaborate with cross-functional partners across Product, Design, Operations, and Sales to define priorities and deliver delightful customer activation experiences.</li>
<li>Leverage AI to reimagine and automate onboarding and implementation workflows, improving speed, personalization, and operational leverage.</li>
<li>Drive execution of the Activation roadmap, ensuring timely, high-quality delivery of systems and features that help customers activate and realize value.</li>
<li>Lead and manage multiple teams of engineers, including hiring, mentoring, performance management, and establishing strong technical direction.</li>
<li>Build systems that integrate identity verification, KYC and compliance workflows, customer data ingestion, and implementation tooling in a scalable and reliable manner.</li>
<li>Drive continuous improvement in engineering processes, technical architecture, and product quality.</li>
<li>Foster a culture of innovation, collaboration, accountability, and customer obsession across the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>Strong technical background and understanding of software development principles.</li>
<li>Expertise leading full-stack engineering teams delivering end-to-end product experiences.</li>
<li>Demonstrated track record of shipping customer-facing features across multiple release cycles.</li>
<li>3+ years of experience managing or leading multiple technical teams in a high-growth environment.</li>
<li>Experience working with cross-functional partners (e.g. Product, Design, Operations, Sales) and driving alignment across stakeholders.</li>
<li>Experience building systems related to onboarding, implementation, identity, workflow automation, customer lifecycle products, or other customer-facing experiences.</li>
<li>Data-driven mindset with the ability to evaluate impact, measure funnel performance, and optimize activation metrics.</li>
<li>Track record building AI-powered product experiences, including LLM-driven automation and personalization.</li>
</ul>
<p>Bonus points</p>
<ul>
<li>Experience with data platforms such as Snowflake, Hex, or similar.</li>
<li>You have started your own technology venture or were an early technical founder/employee. We value entrepreneurial spirit &amp; scrappiness!</li>
<li>You are a champion for the customer and constantly put yourself in their shoes to create intuitive, frictionless experiences.</li>
</ul>
<p>Compensation</p>
<p>The expected salary range for this role is $300,000 - $375,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $375,000</Salaryrange>
      <Skills>Technical leadership, Software development principles, Full-stack engineering, Customer-facing features, Data-driven mindset, AI-powered product experiences, LLM-driven automation, Personalization, Data platforms, Snowflake, Hex, Entrepreneurial spirit, Scrappiness, Customer obsession</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial platform that provides corporate cards and banking services to companies in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8330492002</Applyto>
      <Location>New York, New York, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22bcbb50-ef4</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimise distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimisation skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>477d343e-e37</externalid>
      <Title>Customer Success Architect</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>
<p>About the Customer Success Team:</p>
<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>
<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>
<p>About the Role:</p>
<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value, beginning in pre-sales, where you run proofs of concept to demonstrate quick time to value, and continuing through post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and will consult with them on how to maximize the value of Mixpanel, including managing change during Mixpanel’s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>
<p>For large enterprise customers, post onboarding, you will also continue alongside the Account Managers to drive data trust and product adoption for 100+ end user teams through a change management rollout approach.</p>
<p>Responsibilities:</p>
<p>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</p>
<p>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</p>
<p>Own the customer’s success with Mixpanel, documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</p>
<p>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</p>
<p>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</p>
<p>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</p>
<p>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</p>
<p>Successfully execute on success outcomes whilst balancing project timelines, scope creep, and unanticipated issues</p>
<p>Bridge the technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</p>
<p>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</p>
<p>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</p>
<p>We&#39;re Looking For Someone Who Has</p>
<p>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</p>
<p>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</p>
<p>The ability to communicate with stakeholders at most levels of an organization , from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational efficiency</p>
<p>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</p>
<p>A demonstrated record of success in a customer success, client-facing professional services, consulting, or technical project management role</p>
<p>Excellent written, analytical, and communication skills</p>
<p>Strong process and/or project delivery discipline</p>
<p>Eagerness to learn new technologies and adapt to evolving customer needs</p>
<p>We&#39;d Be Extra Excited For Someone Who Has</p>
<p>Experience in data querying, modeling, and transforming in at least one core tool, including SQL / dbt / Python / Business Intelligence tools / Product Analytics tools, etc.</p>
<p>Familiar with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>
<p>Familiar with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</p>
<p>Familiar with analytics best practices across business segments and verticals</p>
<p>Benefits and Perks</p>
<p>Comprehensive Medical, Vision, and Dental Care</p>
<p>Mental Wellness Benefit</p>
<p>Generous Vacation Policy &amp; Additional Company Holidays</p>
<p>Enhanced Parental Leave</p>
<p>Volunteer Time Off</p>
<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>
<p>Culture Values</p>
<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>
<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>
<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>
<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>
<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>
<p>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</p>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a leading provider of digital analytics software, serving over 29,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7506821</Applyto>
      <Location>Bengaluru, India (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>52e9ea6f-e2a</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>The Enterprise Account Executive will report to the Director of Enterprise GTM and own revenue growth across a portfolio of Scale AI&#39;s largest and most strategic enterprise customers. This role is focused on selling complex, highly technical AI solutions into F500 organisations, partnering with executive, technical, and operational stakeholders to drive long-term value and expansion.</p>
<p>You will be responsible for full-cycle enterprise sales - from prospecting and deal strategy through close, renewal, and expansion - while serving as the quarterback across internal teams including Solutions Engineering, Product, Research, and Operations. This role requires strong ownership, executive presence, and the ability to navigate multi-stakeholder enterprise buying processes in a fast-paced environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and drive relationships with Scale&#39;s largest and most complex Fortune 500 prospects and customers</li>
<li>Build trusted relationships with executive, technical, and operational stakeholders across multiple business units</li>
<li>Develop and execute comprehensive account strategies to drive net-new revenue, expansion, and long-term partnerships</li>
<li>Lead strategic deal planning and mutual close plans across new business, renewals, and expansions</li>
<li>Partner closely with Solutions Engineering and Product teams to deliver compelling, technically credible value propositions</li>
<li>Act as the voice of the customer internally, influencing product roadmap, research priorities, and delivery execution</li>
<li>Maintain deep understanding of customer business goals, AI maturity, and industry trends to proactively identify opportunities</li>
<li>Consistently communicate account health, pipeline, and forecast accuracy using Salesforce, Clari, and related tools</li>
</ul>
<p>Ideally, You Will Have:</p>
<ul>
<li>8+ years of enterprise sales or account management experience, including 2+ years selling deeply technical solutions to both business and technical audiences</li>
<li>A proven track record of closing and expanding large, complex enterprise deals</li>
<li>Demonstrated success consistently achieving or exceeding quota in enterprise sales roles</li>
<li>Experience building and executing long-term account strategies to drive sustained revenue growth</li>
<li>Strong ability to lead enterprise renewal processes from strategy through close</li>
<li>Excellent written and verbal communication skills, with comfort presenting to executive audiences</li>
<li>Strong command of enterprise sales processes and systems (Salesforce, Clari, Outreach, Slack)</li>
<li>A consultative, customer-first mindset with the ability to influence cross-functional internal teams</li>
<li>Experience developing executive-level materials and business cases</li>
<li>Strong project management, organisational skills, and attention to detail</li>
<li>Technical background or strong technical curiosity highly valued, especially familiarity with AI, ML, or data platforms</li>
</ul>
<p>Sales Commission:</p>
<p>This role is eligible to earn commissions. Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>Benefits:</p>
<p>Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
<p>Additional benefits may include a commuter stipend.</p>
<p>Salary Range:</p>
<p>$207,200 - $259,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$207,200 - $259,000 USD</Salaryrange>
      <Skills>Enterprise sales, Account management, Technical sales, Executive presence, Communication skills, Project management, Organisational skills, Attention to detail, AI, ML, Data platforms</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4646946005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b3cf0ff9-4c6</externalid>
      <Title>Support Engineer II</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>
<p>Visit mixpanel.com to learn more.</p>
<p>About The Support Team</p>
<p>Mixpanel Support is a team of talented problem-solvers from diverse backgrounds. We care deeply about helping our customers be successful and enabling them to get value from their data.</p>
<p>We are located all over the world in San Francisco, Barcelona, London, and Singapore...</p>
<p>About The Role</p>
<p>The right candidate is an avid learner, an advocate for customers, and a collaborative teammate. The main responsibility of a Support Engineer is to help users solve technical challenges and use Mixpanel to make impactful product decisions.</p>
<p>We’ve had team members focus on developing their technical skills to join the product and engineering teams, hone their customer-facing skills to become customer success managers or sales engineers, and take on leadership roles in the Support organization.</p>
<p>Responsibilities</p>
<p>The core responsibility of a Support Engineer is to support our customers at every turn in the Mixpanel journey by providing answers to product questions, sharing best practices, and debugging technical issues.</p>
<p>You&#39;ll also develop your technical skills, collaborate with our Product team to improve our product, learn product analytics, and mentor new team members.</p>
<p>Become a Mixpanel product expert - you will help users understand our reports and features, help them use our APIs and SDKs, share best practices, and resolve account issues</p>
<p>Respond to customer inquiries via Zendesk email, chat, Slack, and phone calls</p>
<p>Investigate and document bugs and feature requests to share with our Product and Engineering teams</p>
<p>Provide feedback regarding internal support processes, product functionality, and customer education resources to improve the customer experience</p>
<p>Shape the product by regularly working closely with PMs, engineers, and designers to incorporate customer learnings into product changes</p>
<p>We&#39;re Looking For Someone Who Has</p>
<p>Experience providing customer-facing SaaS support (in customer support, professional services, technical account management, or similar)</p>
<p>Ability to communicate technical concepts effectively in a clear, friendly writing style</p>
<p>Excellent problem-solving and analytical skills</p>
<p>Programming experience, an understanding of web &amp; mobile technologies, and experience interacting with APIs</p>
<p>Experience with debugging and collaborating with engineering to resolve complex technical issues, especially with JavaScript, Python, or mobile technologies</p>
<p>Ability to be resourceful and resilient when faced with ambiguity and new challenges</p>
<p>Dedication to developing expertise in a complex and constantly evolving product</p>
<p>Interest and aptitude to develop technical skills and learn new technologies</p>
<p>Experience providing SLA-based support and/or dedicated support to strategic customers</p>
<p>Fluency in Hebrew and English</p>
<p>Bonus Points</p>
<p>Experience with Mixpanel or other analytics tools</p>
<p>Familiar with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>
<p>Familiar with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</p>
<p>Benefits and Perks</p>
<p>Comprehensive Medical, Vision, and Dental Care</p>
<p>Mental Wellness Benefit</p>
<p>Generous Vacation Policy &amp; Additional Company Holidays</p>
<p>Enhanced Parental Leave</p>
<p>Volunteer Time Off</p>
<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>
<p>Culture Values</p>
<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>
<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>
<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>
<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>
<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>
<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>
<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>
<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>
<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>
<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>
<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>customer facing SAAS support, technical concepts, problem-solving, programming experience, web &amp; mobile technologies, APIs, debugging, collaboration, SLA based support, dedicated support, Hebrew, English, Mixpanel, analytics tools, databases, cloud data warehouses, product analytics implementation methods, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a digital analytics platform that helps teams accelerate adoption, improve retention, and ship with confidence. It has over 29,000 customers, including Workday, Pinterest, LG, and Rakuten Viber.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7650541</Applyto>
      <Location>Tel Aviv, Israel (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0c92b4-c05</externalid>
      <Title>Sr. Manager, Field Engineering</Title>
      <Description><![CDATA[<p>We are looking for a dynamic Sr. Manager, Field Engineering to lead a team of Solution Architects within our Retail vertical. As a key member of our Field Engineering team, you will be responsible for helping customers in the Consumer Goods segment succeed with Databricks and providing outsized value to their businesses. You will also be responsible for maintaining a robust hiring pipeline, establishing relationships across the business, and partnering with sales leadership to hit sales and consumption targets.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Hiring, training, growing, and managing a team of Solutions Architects</li>
<li>Making customers in the Consumer Goods segment successful with Databricks and providing outsized value to their businesses</li>
<li>Maintaining a robust hiring pipeline at all times</li>
<li>Establishing relationships across the business to make customers and team successful</li>
<li>Partnering with sales leadership to hit sales and consumption targets</li>
</ul>
<p>The ideal candidate will have 7+ years of professional experience in the data space with a technical product; 3+ years of experience in the field architecting and delivering data-driven solutions for major accounts within the Retail, Consumer Products, Travel &amp; Hospitality vertical; and deep familiarity with the buy-side and supply-side ecosystem.</p>
<p>In addition, the candidate should have demonstrated expertise with the data collaboration ecosystem, 3+ years of experience building and leading technical pre-sales teams, a deep technical understanding of the impact that Data + AI can drive within the Retail industry, and a track record as a trusted advisor to technical executives who guide strategic data infrastructure decisions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data warehousing, big data, machine learning, data collaboration ecosystem, customer data platforms, clean rooms, data marketplaces</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8362888002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f962d3f-14e</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461218002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>995b724b-c85</externalid>
      <Title>Senior Sales Engineer, Partnerships</Title>
<Description><![CDATA[<p>We are seeking a Senior Sales Engineer, Partnerships to join our team. As a Senior Sales Engineer, you will be responsible for providing technical expertise and strategic enablement to partners, developing strategies to pursue revenue opportunities outside of the life sciences. This role bridges technical knowledge and business strategy, supporting partners during discovery, qualification, and solution design to showcase the value of Komodo&#39;s healthcare data and analytics platform.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Serve as a technical lead on 8-10 strategic opportunities, directly influencing deal cycles and accelerating revenue growth.</li>
<li>Become the definitive subject matter expert on Komodo&#39;s comprehensive suite of healthcare data assets and platform capabilities.</li>
<li>Develop subject matter expertise in, and ownership of, a segment within the Partnerships / Channel Partnerships organization.</li>
<li>Develop scalable technical frameworks, demo environments, and reusable assets that set new organizational standards, with a heavy emphasis on agentic AI workflows.</li>
<li>Drive cross-functional initiatives by partnering with Product, Data Science, and Engineering to deliver customized, innovative solutions.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience in Sales Engineering or Solutions Engineering with a focus on healthcare data and healthcare technology.</li>
<li>Proven track record of understanding and leveraging AI tools to enhance SaaS products or improve operational workflows.</li>
<li>Expertise in healthcare data (e.g., 837/835 transactions, NDC codes) and its practical applications in analytics, reporting, and decision-making.</li>
<li>Strong technical skills, including experience with APIs, data integration, cloud-based architectures (e.g., AWS, Azure), and analyzing large datasets.</li>
<li>An understanding of, and proficiency in, data science techniques, specifically SQL, Python, and/or R.</li>
<li>Excellent communication and presentation skills, with the ability to train partners and translate complex technical concepts for diverse stakeholders.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience working within the provider, payer, or financial service segments.</li>
<li>Technical certifications in AWS, Azure, or data platforms.</li>
<li>Experience with CRM platforms like Salesforce for managing partner and client interactions.</li>
<li>Familiarity with data visualization tools (e.g., Tableau, Looker) to create impactful partner training materials.</li>
<li>Knowledge of identity resolution and privacy-preserving linking technologies.</li>
<li>Prior experience developing joint business plans and co-sell strategies with channel partners.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$143,000-$193,000 USD</Salaryrange>
      <Skills>Sales Engineering, Healthcare Data, Healthcare Technology, AI Tools, APIs, Data Integration, Cloud-Based Architectures, Data Science Techniques, SQL, Python, R, Excellent Communication, Presentation Skills, Experience Working Within Provider, Payer, or Financial Service Segments, Technical Certifications in AWS, Azure, or Data Platforms, Experience with CRM Platforms Like Salesforce, Familiarity with Data Visualization Tools, Knowledge of Identity Resolution and Privacy-Preserving Linking Technologies, Prior Experience Developing Joint Business Plans and Co-Sell Strategies</Skills>
      <Category>Sales</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8495825002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0036f074-845</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456966002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3b6af70e-6ba</externalid>
      <Title>Field Engineering, Data Warehousing Product Specialist</Title>
      <Description><![CDATA[<p>As the Data Warehousing Product Specialist in the Field Engineering team, you will be defining and driving our technical go-to-market strategy in the Asia Pacific and Japan region.</p>
<p>You will be directly working with customers to guide and influence their data warehousing architecture and decisions. You will be acting as a trusted advisor for senior executives and your in-depth technical knowledge will ensure our customers are successful in leveraging Databricks to solve their business problems.</p>
<p>You will be the technical expert to support our field engineering teams internally and you will be expected to help enable the team to understand the key differentiators of our product against our competitors.</p>
<p>You will partner with the Product Manager(s) to help to define the product direction based on local knowledge and inform our product strategy with our go-to-market field teams.</p>
<p>You will not have any direct reports, but you will recruit and lead a group of specialists across the field dedicated to scaling your impact.</p>
<p>You will also be a thought leader externally to the market via speaking at conferences, online webinars, and blog posts.</p>
<p>You will meet with customers to communicate the vision and gather feedback.</p>
<p>You have expertise in cloud-based data warehousing technologies, modern data platform architectures, traditional data warehousing techniques, and preferably industry domain knowledge.</p>
<p>You will excel in creating and articulating a compelling value proposition for our customers and enabling Account Executives and Field Engineers to operate effectively using best practices and assets that you own and develop.</p>
<p>The impact you will have:</p>
<ul>
<li>Lead the vision and strategy for Data Warehousing Adoption in Asia Pacific and Japan</li>
<li>Develop materials for our GTM team to effectively communicate our value proposition</li>
<li>Support our field teams in key strategic pursuit opportunities</li>
<li>Lead a team of subject matter experts (SMEs) to scale your impact in local regions and enable the broader field team to have confidence in competing in the market</li>
<li>Act as the thought leader to build confidence and relationships with our customers at the executive level</li>
<li>Present externally at conferences, events, and webinars and publish content to establish our position in the market</li>
<li>Work with post sales and partners to establish migration best practices</li>
<li>Act as a cross-functional representative with Product Management, Product Marketing, and Engineering on our go-to-market motion</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data warehousing, cloud-based data warehousing technologies, modern data platform architectures, traditional data warehousing techniques, industry domain knowledge</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8151567002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6a4bf88-cba</externalid>
      <Title>Engineering Manager, Spark Connect</Title>
      <Description><![CDATA[<p>We are building the world&#39;s best data and AI infrastructure platform so our customers can solve the world&#39;s toughest problems. The Spark Platform organisation builds the core technologies that power Databricks and the Apache Spark ecosystem. We are looking for an Engineering Manager to lead the Spark Connect team.</p>
<p>This leader will own the Databricks Spark Connect platform, drive reliability and execution for a critical Serverless component, and lead our open source Spark Connect strategy in Apache Spark. This includes owning the OSS roadmap, partnering across teams, and driving adoption of Spark Connect in the open source ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the team responsible for Spark Connect, a critical component of Databricks Serverless.</li>
<li>Own platform reliability, scalability, and operational excellence.</li>
<li>Drive Databricks&#39; technical leadership in open source Spark Connect.</li>
<li>Define and execute the strategy for OSS Spark Connect adoption and ecosystem growth.</li>
<li>Partner across product, runtime, serverless, and OSS stakeholders to shape roadmap and architecture.</li>
<li>Hire and grow a strong engineering team with high technical and operational standards.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>10+ years of experience in distributed systems, infrastructure, or data platforms.</li>
<li>3+ years of experience managing high-performing engineering teams.</li>
<li>Strong technical depth and a track record of owning critical production systems.</li>
<li>Experience leading cross-functional initiatives and influencing technical direction.</li>
<li>Passion for open source, developer platforms, and large-scale systems.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 2 Pay Range $180,500-$248,150 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,500-$248,150 USD</Salaryrange>
      <Skills>distributed systems, infrastructure, data platforms, engineering management, open source, developer platforms, large-scale systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8502969002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e68e5c3b-1e2</externalid>
      <Title>Lakebase Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>
<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>
<p>You will lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>
<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</p>
<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>
<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>
<p>You will compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>
<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</p>
<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>
<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>
<p>This role requires the ability to operate across two key motions simultaneously:</p>
<p>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</p>
<p>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</p>
<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>
<p>Success in this role requires strength in four areas:</p>
<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>
<p>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>
<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>
<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>
<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>
<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</p>
<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>
<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>
<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>
<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>
<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</p>
<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>
<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>
<p>Bachelor’s degree or equivalent practical experience.</p>
<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>
<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>
<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</p>
<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>
<p>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</p>
<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data services</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8449848002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6d97a39-7f0</externalid>
      <Title>Senior Engineering Manager, Activation</Title>
<Description><![CDATA[
<p>Join us at Brex, the intelligent finance platform that enables companies to spend smarter and move faster. We&#39;re looking for a seasoned engineering leader to head our engineering group focused on building the systems and product experiences that power customer activation at Brex.</p>
<p>As a Senior Engineering Manager, you will be responsible for driving business and product strategies, collaborating with cross-functional partners, leveraging AI to reimagine and automate onboarding and implementation workflows, and leading and managing multiple teams of engineers.</p>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field</li>
<li>Strong technical background and understanding of software development principles</li>
<li>Expertise leading full-stack engineering teams delivering end-to-end product experiences</li>
<li>Demonstrated track record of shipping customer-facing features across multiple release cycles</li>
<li>3+ years of experience managing or leading multiple technical teams in a high-growth environment</li>
</ul>
<p>Bonus points:</p>
<ul>
<li>Experience with data platforms such as Snowflake, Hex, or similar</li>
<li>You have started your own technology venture or were an early technical founder/employee</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $300,000 - $375,000. However, the starting base pay will depend on a number of factors including the candidate&#39;s location, skills, experience, market demands, and internal pay parity.</p>
<p>If you&#39;re a champion for the customer and constantly put yourself in their shoes to create intuitive, frictionless experiences, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $375,000</Salaryrange>
      <Skills>Leadership, Software Development, AI, Data Platforms, Full-Stack Engineering, Snowflake, Hex</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial platform that provides corporate cards and banking services to companies in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8330487002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a17bc01-d7d</externalid>
      <Title>Staff Software Engineer</Title>
<Description><![CDATA[<p>dbt Labs is seeking a Staff Software Engineer to join our Engineering team. As a seasoned engineer, you will architect and build the durable memory substrate that powers agentic analytics workflows. This platform stores not just metadata, but meaning: decisions, intent, rationale, and history, and makes it safely accessible to humans, agents, and applications.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Prototyping candidate technical solutions and identifying the best fit for the context engine.</li>
<li>Architecting and building the core Context Platform.</li>
<li>Designing schemas and primitives for Decision Memory and enterprise context.</li>
<li>Owning context storage systems (graph, vector, event/time-based).</li>
<li>Building read/write/query APIs used by agents, products, and external apps.</li>
<li>Designing permission-aware, auditable context access.</li>
</ul>
<p>You will be working closely with agentic systems engineers and product leadership to ensure the context engine is interoperable, portable, and zero-lock-in by design.</p>
<p>In this role, you will own:</p>
<ul>
<li>Context schemas and schema evolution strategies.</li>
<li>Storage and data modeling choices.</li>
<li>Platform APIs and interfaces.</li>
<li>Security, identity propagation, and audit foundations.</li>
<li>Long-term scalability and correctness of context data.</li>
</ul>
<p>You will not own:</p>
<ul>
<li>Agent behavior or orchestration logic.</li>
<li>Business rules or governance policy decisions.</li>
<li>Product UI or workflow automation.</li>
</ul>
<p>The ideal candidate will have significant experience building distributed systems, data platforms, or infrastructure, and will be comfortable operating in ambiguous, greenfield problem spaces. They will also have deep expertise in data modeling and schema design, experience designing shared platforms used by many teams, and strong instincts around APIs, contracts, and backward compatibility.</p>
<p>Nice to have: experience with knowledge graphs, metadata systems, or search/retrieval systems; experience building systems with governance, auditability, or compliance requirements; and familiarity with dbt, modern analytics stacks, or developer tooling.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Data platforms, Infrastructure, Data modeling, Schema design, APIs, Contracts, Backward compatibility, Knowledge graphs, Metadata systems, Search/retrieval systems, dbt, Modern analytics stacks, Developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4661362005</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58a44dab-91a</externalid>
      <Title>Partner Solutions Architect - Japan</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>
<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>
<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>
<p>Internal operating docs show this motion consistently includes enablement sessions, QBR sponsorships, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>
<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>
</ul>
<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and engaged pipeline.</p>
<p>What you&#39;ll need:</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Team Interviews</li>
<li>Demo Round</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005</Applyto>
      <Location>Japan - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2aaf7ac-804</externalid>
      <Title>Security Engineer - Threat Detection</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>You will design, build, and maintain detections that identify malicious activity across Stripe&#39;s infrastructure, applications, and cloud environments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and tune high-fidelity detections across modern SIEM platforms, covering adversary TTPs across the full attack lifecycle</li>
<li>Develop detection hypotheses by researching TTPs, identifying evidence sources, and determining detection opportunities across available telemetry</li>
<li>Conduct hypothesis-driven threat hunts to identify malicious activity, uncover detection gaps, and validate security controls</li>
<li>Perform malware analysis and reverse engineering to extract indicators and inform detection strategies</li>
<li>Build network-based detections (flow, pcap, protocol analysis) and endpoint-based detections (event logs, EDR telemetry, memory/file artifacts) across Windows, Linux, and macOS</li>
<li>Partner with Threat Intelligence to operationalize intel reports into detections, hunting leads, and enrichment logic</li>
<li>Collaborate with IR, SOC, and offensive security teams to validate and refine detections based on real-world incidents and red team exercises</li>
<li>Build data pipelines, automation, and tooling that enable detection-as-code practices and scalable deployment</li>
<li>Map detection coverage to MITRE ATT&amp;CK, identifying and prioritizing gaps across key attack surfaces</li>
<li>Lead projects, mentor teammates, and champion quality standards within the team</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience in detection engineering, threat hunting, or security operations</li>
<li>Demonstrated experience writing detection logic in modern SIEM platforms (e.g., Splunk, Chronicle, Elastic, CrowdStrike NG-SIEM, Panther, Microsoft Sentinel)</li>
<li>Strong understanding of adversary tradecraft across the attack lifecycle: initial access, privilege escalation, lateral movement, defense evasion, persistence, and exfiltration</li>
<li>Ability to extract TTPs from threat intelligence reports and translate them into detection opportunities</li>
<li>Experience developing network-based and endpoint-based detections across multiple OS platforms (Windows, Linux, macOS)</li>
<li>Experience analyzing telemetry across endpoint, network, cloud (AWS/GCP/Azure), identity, and application log sources</li>
<li>Proficiency in detection/query languages (SPL, KQL, EQL, YARA-L, SQL) and programming (Python or similar)</li>
<li>Strong communication skills with the ability to document detection logic and explain findings to technical and non-technical audiences</li>
<li>Adversarial mindset: understanding how attackers operate to build detections that catch real-world threats</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience in detection engineering or threat hunting within fintech, financial services, or highly regulated environments</li>
<li>Background in malware analysis, reverse engineering, or threat research</li>
<li>Experience with purple team operations: collaborating with offensive security to validate detections</li>
<li>Familiarity with big data platforms (Databricks, Trino, PySpark) for large-scale log analysis</li>
<li>Proficiency with AI/LLM-assisted development tools (Claude Code, Cursor, GitHub Copilot) applied to detection workflows</li>
<li>Interest in agentic automation: using LLMs to augment hunting, tuning, or triage</li>
<li>Experience with detection validation tools (Atomic Red Team, ATT&amp;CK Evaluations)</li>
<li>Contributions to open-source detection content, research, or conference presentations</li>
<li>Relevant certifications such as HTB CDSA, GCIH, GCFA, GNFA, OSCP, TCM PMAT, or GREM</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>detection engineering, threat hunting, security operations, SIEM platforms, adversary tradecraft, network-based detections, endpoint-based detections, telemetry analysis, detection/query languages, programming, communication skills, fintech, financial services, malware analysis, reverse engineering, purple team operations, big data platforms, AI/LLM-assisted development tools, agentic automation, detection validation tools, open-source detection content, relevant certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7827230</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4ea7999b-3d8</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements, tackling their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical work to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494145002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6421dea-6e3</externalid>
      <Title>Strategic Hunter Account Executive - Lakebase</Title>
      <Description><![CDATA[<p>We are seeking a Strategic Hunter Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>
<p>This high-impact role sits within the Lakebase Go-To-Market team and partners closely with regional Account Executives to drive adoption of Lakebase with platform, application, and data teams.</p>
<p>Lakebase gives customers a unified, governed foundation for operational workloads and AI-native applications, helping them move away from a fragmented estate of point databases toward a modern, scalable, serverless Postgres service.</p>
<p>If you want to be at the forefront of operational databases for AI and intelligent applications at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>
<p><strong>The impact you will have</strong></p>
<ul>
<li>Drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</li>
<li>Lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</li>
<li>Sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</li>
<li>Run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</li>
<li>Orchestrate proof-of-value engagements and POCs that validate Lakebase&#39;s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</li>
<li>Compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</li>
<li>Align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</li>
<li>Partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</li>
<li>Enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</li>
</ul>
<p><strong>What success looks like in this role</strong></p>
<p>This role requires the ability to operate across two key motions simultaneously:</p>
<ul>
<li>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>
<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>
</ul>
<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>
<p>Success in this role requires strength in four areas:</p>
<ul>
<li>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</li>
<li>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</li>
<li>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</li>
<li>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</li>
</ul>
<p><strong>What we look for</strong></p>
<ul>
<li>7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</li>
<li>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</li>
<li>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</li>
<li>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</li>
<li>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</li>
<li>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</li>
<li>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</li>
<li>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</li>
<li>Bachelor&#39;s degree or equivalent practical experience.</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>Experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</li>
<li>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</li>
<li>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</li>
<li>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</li>
<li>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</li>
<li>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p><strong>Our Commitment to Diversity and Inclusion</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, operational databases, Postgres, MySQL, cloud-native DBaaS, data/AI infrastructure, technical buyers, business leaders, modern data and application architectures, cloud-native services, microservices, event-driven systems, AI and analytics strategies, technical stakeholders, business stakeholders, value selling skills, discovering pain, building a business case, quantified outcomes, communication, storytelling, negotiation skills, OLTP workloads, transactional cloud database services, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics use cases, AI-native applications, agent-driven applications, high-growth environments, category-creating environments, partner collaborations, ISV collaborations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477547002</Applyto>
      <Location>Bengaluru, India; Mumbai, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>47807ca3-e36</externalid>
      <Title>Strategic AI/BI Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a Strategic AI/BI Account Executive to help enterprise customers transform how business users interact with data. This high-impact role sits within the AI Go-To-Market team and partners closely with Enterprise Account Executives to drive adoption of Databricks AI/BI and Genie in APJ.</p>
<p>You will help organisations move beyond static dashboards to governed, conversational, AI-powered analytics at the centre of the convergence of business intelligence, data platforms, and generative AI. Enterprise analytics is rapidly evolving from dashboards and static reporting to conversational, AI-driven decision platforms. Databricks AI/BI and Genie empower business users to securely interact with governed data using natural language, transforming the data platform into a true decision platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partner with Enterprise AEs to identify, qualify, and close AI/BI opportunities</li>
<li>Engage C-level, analytics, and line-of-business leaders to modernise analytics strategies</li>
<li>Displace or expand legacy BI platforms with AI-powered, governed analytics solutions</li>
<li>Lead conversations around semantic governance, self-service analytics, and natural language data access</li>
<li>Drive proof-of-value engagements and scale enterprise-wide adoption</li>
<li>Align AI/BI initiatives to measurable business outcomes (productivity, speed to insight, revenue impact)</li>
<li>Enable field teams and serve as a subject matter expert on modern analytics architectures</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Enterprise sales experience in BI, analytics, data platforms, or AI/ML</li>
<li>Strong understanding of modern analytics architectures and data governance</li>
<li>Ability to sell to both technical and business stakeholders</li>
<li>Executive presence and experience navigating complex buying cycles</li>
<li>Passion for AI and the impact of GenAI on enterprise analytics</li>
<li>Experience operating in a specialist or overlay sales model</li>
<li>Ability to translate technical capabilities into clear business value</li>
<li>7+ years of Enterprise Sales experience, exceeding quotas in larger accounts</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with modern BI platforms such as Tableau, Power BI, Looker, or ThoughtSpot</li>
<li>Familiarity with semantic layers, metrics stores, or governed data models</li>
<li>Understanding of lakehouse architectures and cloud data platforms</li>
<li>Exposure to GenAI, natural language interfaces, or conversational applications</li>
<li>Consulting or solution design experience in customer-facing roles</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales experience in BI, analytics, data platforms, or AI/ML, Strong understanding of modern analytics architectures and data governance, Ability to sell to both technical and business stakeholders, Executive presence and experience navigating complex buying cycles, Passion for AI and the impact of GenAI on enterprise analytics, Experience with modern BI platforms such as Tableau, Power BI, Looker, or ThoughtSpot, Familiarity with semantic layers, metrics stores, or governed data models, Understanding of lakehouse architectures and cloud data platforms, Exposure to GenAI, natural language interfaces, or conversational applications, Consulting or solution design experience in customer-facing roles</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company with over 10,000 organisations worldwide relying on its data intelligence platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8441884002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>276f3a05-2e9</externalid>
      <Title>Field CTO - Americas Industries</Title>
      <Description><![CDATA[<p>We are seeking a Field Chief Technology Officer (Field CTO) for the Americas Industries Business Unit to be a senior, customer-facing technology and business transformation thought leader for our most strategic, often global, accounts in regulated industries.</p>
<p>This individual contributor role sits at the intersection of data and AI strategy, industry transformation, and executive relationship-building, working closely with C-level leaders to drive multi-year change on the data platform while representing real-world needs back into Databricks.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and maintaining trusted-advisor relationships with C-level executives in large US-based and global accounts, especially in highly regulated industries.</li>
<li>Cultivating a strong social and professional network across customer executives, boards, key industry bodies, and partners.</li>
<li>Shaping executive thinking on modern data and AI architectures, with emphasis on Lakehouse and data platform modernization as the primary lever for long-term Gen AI impact.</li>
<li>Leading C-level briefings, strategy sessions, and multi-day workshops that connect business outcomes, regulatory constraints, and operating model change to concrete Databricks-based roadmaps.</li>
<li>Serving as a deep technical counterpart in the field, maintaining L200–L300 proficiency across Databricks products and being able to credibly engage architects, data engineers, and data scientists on solution design and trade-offs.</li>
<li>Generalizing patterns from the field into reusable reference architectures, industry blueprints, and best practices for regulated industries, and sharing them through blogs, webinars, whitepapers, and conference keynotes.</li>
<li>Orchestrating the broader ecosystem (cloud providers, GSIs, consultancies, ISVs) around customer objectives, ensuring Databricks is at the center of multi-year transformation programs rather than isolated projects.</li>
<li>Partnering with Account Executives, Solutions Architects, Industry Leads, and Product Specialists to drive complex, multi-year sales cycles, securing platform decisions and expansions while influencing ACV and consumption growth.</li>
<li>Providing structured, prioritized feedback from strategic customers into Product, Engineering, and Field leadership to influence product roadmap, especially around data, governance, security, and regulated-industry requirements.</li>
<li>Mentoring senior Field Engineering and industry-focused talent, contributing to a pipeline of principal- and CTO-level leaders and codifying ways of working for complex, regulated accounts.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>15+ years of experience spanning enterprise technology and consulting, including leading or advising on multi-year data platform and analytics transformations in large, complex organizations.</li>
<li>Significant time spent inside a large enterprise software or cloud company in roles that required navigating matrixed organizations and driving change at scale, combined with direct industry exposure rather than a career spent solely in horizontal software.</li>
<li>Experience in or with regulated industries, with familiarity with regulatory and compliance considerations affecting data and AI platforms.</li>
<li>A background that blends hands-on technology and architecture work on data platforms and analytics, organizational and operating model change, executive consulting or advisory, and proven ability to operate as a highly credible peer to C-level executives.</li>
<li>Strong, proactive networker who is naturally curious about which associations, councils, and forums matter for a given customer set, and who uses those networks to create new executive entry points and opportunities.</li>
<li>Demonstrated longevity and impact in prior roles, with evidence of building and sustaining long-term customer relationships and programs rather than frequent short stints.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$249,800-$343,400 USD</Salaryrange>
      <Skills>data and AI strategy, industry transformation, executive relationship-building, Lakehouse and data platform modernization, Gen AI impact, L200–L300 proficiency across Databricks products, solution design and trade-offs, reference architectures, industry blueprints, best practices for regulated industries, cloud providers, GSIs, consultancies, ISVs, complex, multi-year sales cycles, platform decisions and expansions, ACV and consumption growth, product roadmap, data governance, security, regulated-industry requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It has over 10,000 organizations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8306218002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>015afe59-9fd</externalid>
      <Title>Data Analyst II</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>
<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>
<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>
<p>Our Data Scientists, Analysts, and Engineers work together to make data, and the insights derived from data, a core asset across the company.</p>
<p>What you’ll do</p>
<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>
<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>
<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>
<p>Where you’ll work</p>
<p>This role will be based in our New York office.</p>
<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>
<p>We currently require a minimum of three coordinated days in the office per week: Monday, Wednesday, and Thursday.</p>
<p>As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<ul>
<li>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</li>
<li>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</li>
<li>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</li>
<li>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</li>
<li>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</li>
<li>Partner with various departments, including Sales, Operations, Product, and Finance, to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</li>
<li>Contribute to the automation of recurring analyses and reporting workflows using Python.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3+ years of experience in data analytics or a related role in a professional setting.</li>
<li>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</li>
<li>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</li>
<li>Experience with Python for data analysis, automation, or scripting.</li>
<li>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</li>
<li>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</li>
<li>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</li>
<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</li>
</ul>
<p>Bonus points</p>
<ul>
<li>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</li>
<li>Familiarity with dbt for data modeling and transformation.</li>
<li>Exposure to data pipeline orchestration tools (e.g., Airflow).</li>
<li>Experience in fintech, financial services, or payments.</li>
<li>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</li>
</ul>
<p>Compensation</p>
<p>The expected salary range for this role is $93,600 - $117,000.</p>
<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>
<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$93,600 - $117,000</Salaryrange>
      <Skills>SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463702002</Applyto>
      <Location>New York, New York, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>50808499-c0b</externalid>
      <Title>Senior Customer Solutions Resident Architect</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. Since 2016, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</p>
<p>As of February 2025, we’ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>
<p>We’re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter. At our core, we believe in empowering data practitioners:</p>
<ul>
<li>Reliable, high-quality data is the fuel that propels AI-powered data engineering.</li>
<li>AI is changing data work, fast. dbt&#39;s data control plane keeps data engineers ahead of that curve.</li>
<li>We empower engineers to deliver reliable, governed data faster, cheaper, and at scale.</li>
</ul>
<p>dbt Labs is now synonymous with analytics engineering, defining the modern data stack and serving as the data control plane for enterprise teams around the world. And we&#39;re just getting started.</p>
<p>We’re growing fast and building a team of passionate, curious people across the globe. Learn more about what makes us special by checking out our values.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking an experienced Senior Customer Solutions Resident Architect to join our team. In this role, you will drive critical customer outcomes by delivering high-impact technical guidance to strategic accounts. You will be part of a high-visibility initiative that supports pre-sales, accelerates adoption, enables key migrations, and mitigates churn risks.</p>
<p>This role is designed to deploy RA-level expertise flexibly, aligning with customer and business needs to drive growth, retention, and expansion.</p>
<p><strong>What You’ll Do</strong></p>
<p><strong>Accelerate Customer Success Across the Lifecycle</strong></p>
<ul>
<li>Support strategic pre-sales opportunities by providing technical expertise to prospects</li>
<li>Assist in launching and onboarding new customers who have not purchased RA services, ensuring they successfully adopt dbt Cloud</li>
<li>Execute proactive adoption plays, including migrations, new feature implementations (e.g., Semantic Layer, Mesh), and major version upgrades</li>
<li>Lead reactive adoption initiatives to de-risk churn or contraction and position accounts for future growth</li>
</ul>
<p><strong>Deliver Technical Excellence</strong></p>
<ul>
<li>Advise on architecture, design, implementation, troubleshooting, and best practices in dbt Cloud environments</li>
<li>Build solution MVPs and guide long-term technical strategies tailored to customer needs</li>
<li>Engage on multiple projects simultaneously with clear scoping, start and end dates, and outcome tracking</li>
</ul>
<p><strong>Collaborate Across Teams</strong></p>
<ul>
<li>Partner closely with Customer Solutions Architects (CSAs), Sales, Solutions Architects, Training, and Support</li>
<li>Provide feedback to Product and Engineering to improve customer experience and prioritize technical needs</li>
<li>Champion customer success through thoughtful, transparent communication and cross-functional collaboration</li>
</ul>
<p><strong>Advance Best Practices and Team Impact</strong></p>
<ul>
<li>Help build out and refine this evolving function alongside the broader RA organization</li>
<li>Track and manage capacity and engagement effectiveness similarly to other RA-led initiatives</li>
</ul>
<p><strong>What You’ll Need</strong></p>
<ul>
<li>5+ years of experience in technical customer-facing roles such as post-sales consulting, technical architecture, or solution delivery</li>
<li>Expertise with at least one modern cloud data platform (Snowflake, Databricks, BigQuery, or Redshift)</li>
<li>Hands-on experience deploying or configuring dbt Cloud, with at least 1 year working with dbt</li>
<li>Strong proficiency in SQL; working knowledge of Python in analytics contexts preferred</li>
<li>Comfort leading technical project delivery, managing scope, timelines, and stakeholder expectations across multiple simultaneous engagements</li>
<li>Clear, concise communication skills for both technical and executive audiences</li>
<li>A collaborative mindset, thriving in a remote, transparent, and highly cross-functional organization</li>
<li>Willingness to travel 2–4 times per year for company-wide events</li>
</ul>
<p><strong>What Will Make You Stand Out</strong></p>
<ul>
<li>dbt Analytics Engineering Certification</li>
<li>Ability to influence technical direction and build consensus across internal and customer teams</li>
<li>Experience with traditional enterprise ETL tools (e.g., Informatica, Datastage, Talend) and how they relate to modern data workflows</li>
<li>Familiarity with strategic sales or renewal processes, including proactive and reactive adoption efforts</li>
<li>Proven success accelerating usage, adoption, and expansion in large, complex accounts</li>
</ul>
<p><strong>Remote Hiring Process</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Task</li>
<li>Task Review</li>
<li>Final Values Interview</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Health &amp; wellness stipend</li>
<li>Flexible stipends for:
<ul>
<li>Home office setup</li>
<li>Learning and development</li>
<li>Office space</li>
<li>And more!</li>
</ul>
</li>
</ul>
<p><strong>Compensation</strong></p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs’ total rewards during your interview process.</p>
<p>In Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, and Austin, an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role in the specific locations listed is: $163,000 - $200,000</li>
<li>The typical starting salary range for this role is: $146,000 - $180,000</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$146,000 - $180,000</Salaryrange>
      <Skills>modern cloud data platform, dbt Cloud, SQL, Python, technical project delivery, clear, concise communication skills, dbt Analytics Engineering Certification, traditional enterprise ETL tools, strategic sales or renewal processes, proven success accelerating usage, adoption, and expansion</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a company that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4682381005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>37fdd29a-858</externalid>
      <Title>Principal Product Manager, AI Compliance</Title>
<Description><![CDATA[<p>We&#39;re looking for a Principal Product Manager to lead our AI compliance efforts. As a key member of our product team, you will define and drive the product strategy that ensures our AI models are developed, deployed, and used in a way that aligns with our values and strategic objectives and accounts for relevant obligations.</p>
<p>Your primary responsibility will be to drive AI compliance and governance product strategy across Pinner- and advertiser-facing AI products. This includes ensuring that model training and inference align with global regulatory obligations, internal policies, and Pinterest&#39;s values.</p>
<p>You will own user-facing and internal data controls for AI training, setting requirements and roadmaps that balance product performance, privacy, and regulatory needs while ensuring effective internal enforcement and awareness.</p>
<p>You will also expand and support AI compliance tooling, including detection and labeling systems, governance workflows, and inventories, so teams across Pinterest can build compliant AI experiences efficiently and consistently.</p>
<p>In addition, you will partner deeply with Legal, Privacy, Policy, Security, and Engineering to translate complex regulatory requirements into clear, prioritized product workstreams and scalable platform capabilities.</p>
<p>As a Principal Product Manager, you will lead high-visibility, cross-functional programs that span multiple product areas, infrastructure, and engineering teams, creating alignment on tradeoffs, sequencing, and resourcing for AI governance initiatives across the company.</p>
<p>You will serve as a principal-level thought partner on AI governance, influencing long-range AI product strategy, identifying new risk or compliance gaps early, and advocating for the infrastructure and processes needed to keep Pinterest ahead of regulatory change.</p>
<p>To be successful in this role, you will need to have deep AI/ML or data platform product experience, including building or scaling products that rely on model training data, user data, or complex backend systems. Experience in AI safety, responsible AI, or AI compliance is a strong plus.</p>
<p>You will also need to have proven success driving cross-company, multi-year programs at a Staff/Principal level (or equivalent), especially in ambiguous problem spaces that require both strategic frameworks and hands-on execution.</p>
<p>Additionally, you will need to have experience working closely with Legal, Privacy, Policy, and Security teams to operationalize regulatory or policy requirements into product features, data controls, and platform capabilities.</p>
<p>Fluency with data and governance concepts is also required, including comfort reasoning about data classification, lineage, retention, opt-outs, and enforcement, and partnering with technical teams to define robust measurement and monitoring.</p>
<p>Finally, you will need to have exceptional communication and influence skills, with the ability to align executive stakeholders and cross-functional partners around a clear point of view on AI risk, compliance tradeoffs, and sequencing.</p>
<p>If you are a motivated and experienced product leader who is passionate about AI governance and compliance, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$228,911-$471,286 USD</Salaryrange>
      <Skills>AI/ML, Data Platform, Product Management, Compliance, Governance, Regulatory Affairs, Policy Development, Risk Management, Communication, Influence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save visual content. It has over 320 million monthly active users.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494948</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e40d534f-76a</externalid>
      <Title>Resident Architect</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>We&#39;re seeking an experienced Resident Architect (RA) with a passion for solving challenging problems with dbt to join our Professional Services team. RAs are billable to dbt Enterprise customers and help achieve our mission to empower data developers to create and disseminate organisational knowledge.</p>
<p>Responsibilities</p>
<ul>
<li>Work on a variety of impactful customer technical projects - inclusive of implementation, troubleshooting configurations, instilling best practices, and solutioning MVPs and long-term solutions to customer-specific requirements</li>
<li>Consult on architecture and design</li>
<li>Ensure our most strategic enterprise customers are adopting the product</li>
<li>Collaborate with other internal customer-facing teams at dbt Labs - Sales, Solution Architects, Training, Support</li>
<li>Provide critical feedback to dbt Labs product and engineering teams to improve and prioritise customer requests and ensure rapid resolution for engagement-specific issues</li>
<li>Become a product expert with dbt in the context of the modern data stack (if you aren&#39;t already)</li>
</ul>
<p>What You&#39;ll Need</p>
<ul>
<li>4+ years&#39; experience working with technical data tooling, ideally in a customer-facing post-sales, technical architect, or consulting role</li>
<li>Deep expertise in at least one data platform (Snowflake, Databricks, BigQuery, Redshift)</li>
<li>Experience using, deploying, or configuring dbt in an enterprise setting, with a minimum of 1 year working with dbt</li>
<li>Proficiency in writing SQL and Python in analytics contexts</li>
<li>You look forward to building skills in technical areas that support deployment and integration of dbt enterprise solutions to complete customer projects</li>
<li>Customer focus, embracing one of our core values: users are our best advocates</li>
<li>Strong organisational skills with the ability to manage multiple technical projects simultaneously - including defining scope, tracking timelines, and ensuring deliverables are met</li>
<li>Clear and concise communicator with the ability to engage internal and external stakeholders, effectively explain complex technical or organisational challenges, and propose thoughtful, iterative solutions</li>
<li>The ability to thrive in a remote organisation that highly values transparency and cross-collaboration</li>
<li>Travel approximately 2-4x/year for customer onsite sessions, team offsites, and company events is expected</li>
</ul>
<p>What Will Make You Stand Out</p>
<ul>
<li>You have obtained the dbt Analytics Engineering Certification</li>
<li>You have the ability to advise on dbt enterprise recommendations and build direction/consensus with the customer to move forward</li>
<li>Experience with traditional Enterprise ETL tooling (Informatica, Datastage, Talend)</li>
</ul>
<p>Remote Hiring Process</p>
<ul>
<li>Interview with a Talent Acquisition Partner</li>
<li>Hiring Manager Interview</li>
<li>Technical Task + Presentation</li>
<li>Team Interview</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $114,000 - $137,700</li>
<li>The typical starting salary range for this role in the select locations listed is: $126,000 - $153,000</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,000 - $137,700</Salaryrange>
      <Skills>dbt, data platform, Snowflake, Databricks, BigQuery, Redshift, SQL, Python, analytics engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4627942005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>85f1f87e-70f</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, providing training, and performing other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects that lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>You will work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461327002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ffd169d9-40b</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, providing training, and performing other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>You will work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data platforms &amp; analytics, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified data intelligence platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461239002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
<Description><![CDATA[
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3168d7d3-70b</externalid>
      <Title>Partner Solutions Architect - North America</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>
<p>Responsibilities</p>
<ul>
<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation (and yes we use it!)</li>
<li>Pension coverage</li>
<li>Excellent healthcare</li>
<li>Paid Parental Leave</li>
<li>Wellness stipend</li>
<li>Home office stipend, and more!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a software company that provides an analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, they have surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005</Applyto>
      <Location>Canada - Remote; US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26f523c0-bbd</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements, tackling their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, providing training, and performing other technical work to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while providing excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494154002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2c0fc802-cf3</externalid>
      <Title>Staff Product Manager, AI Platform</Title>
      <Description><![CDATA[<p>At Databricks, we are building the world&#39;s best data and AI infrastructure platform. As a Staff Product Manager on the AI Platform team, you will drive the vision and roadmap for AI platform product areas and define how customers build, train, deploy, and monitor AI and ML systems on Databricks.</p>
<p>You will own the product roadmap for AI platform areas, defining what we build, why, and in what order, to accelerate customer adoption of AI and ML in production. You will drive strategy for key AI platform capabilities, shaping how enterprises operationalize AI at scale.</p>
<p>You will partner closely with engineering teams to make deeply technical decisions about ML infrastructure, from distributed training architectures to real-time serving systems. You will represent the voice of the customer by engaging directly with enterprise ML teams, translating their pain points and workflows into platform capabilities that simplify the path to production AI.</p>
<p>You will collaborate with GTM, Solutions Architecture, and Customer Success teams to drive enterprise adoption, shape field enablement, and inform competitive positioning. You will define pricing, packaging, and commercialization strategy for AI platform features, working with business teams to maximize value capture.</p>
<p>You will grow end-user engagement with Databricks AI tools by identifying adoption bottlenecks and partnering cross-functionally to remove them.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$181,700-$249,800 USD</Salaryrange>
      <Skills>Deep technical background in computer science, electrical engineering, or equivalent degree, Experience with ML/AI infrastructure, data platforms, or cloud services, Proven enterprise B2B product management experience with highly technical customers, Ability to engage credibly with world-class ML engineers, Familiarity with recommendation systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data and AI workloads. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8420609002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ff568ca-d59</externalid>
      <Title>Senior Software Engineer - Data Infrastructure Services</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a senior software engineer to join its Data Platforms Team. The ideal candidate will have experience in database and stream processing, and will be responsible for designing and implementing the platform to deliver data to teams with a focus on providing managed solutions through APIs.</p>
<p>The successful candidate will participate in the operation and scaling of relational data platforms, develop a stream processing architecture, and improve the performance, security, reliability, and scalability of our data platforms and related services. They will also establish guidelines and guardrails for data access and storage for stakeholder teams, and ensure compliance with data protection regulations.</p>
<p>In addition to technical skills, the ideal candidate will be able to grow, change, invest in their teammates, be invested-in, share their ideas, listen to others, be curious, have fun, and be themselves. CoreWeave values diversity and inclusion, and encourages candidates from all backgrounds to apply.</p>
<p>Key responsibilities:</p>
<ul>
<li>Design and implement the platform to deliver data to teams, with a focus on providing managed solutions through APIs</li>
<li>Participate in the operation and scaling of relational data platforms</li>
<li>Develop a stream processing architecture</li>
<li>Improve the performance, security, reliability, and scalability of our data platforms and related services</li>
<li>Establish guidelines and guardrails for data access and storage for stakeholder teams</li>
<li>Ensure compliance with data protection regulations</li>
<li>Grow, change, invest in your teammates, be invested-in, share your ideas, listen to others, be curious, have fun, and be yourself</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in software or infrastructure engineering</li>
<li>Experience operating services in production and at scale</li>
<li>Familiarity with a distributed NewSQL datastore such as CockroachDB, TiDB, YDB or Yugabyte, and/or stream processing tools such as NATS or Kafka</li>
<li>Experience designing and operating these systems at scale</li>
<li>Familiarity with Kubernetes, and an interest in or comfort with using it for event-driven and/or stateful orchestration</li>
<li>Proficiency in Go, Python, or Java, and an interest in contributing to open source</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>database and stream processing, API design and implementation, operational and scaling of relational data platforms, stream processing architecture, performance, security, reliability, and scalability of data platforms, data access and storage guidelines, data protection regulation compliance, Kubernetes, Go/Python/Java, open source contribution</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671479006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d57b93e-423</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements, tackling their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, providing training, and performing other technical work to help customers get the most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while providing excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects, leading to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Ability to travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, data architecture, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456948002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c1a6c403-cf8</externalid>
      <Title>Strategic Account Executive, UAE</Title>
      <Description><![CDATA[<p>Want to help solve the world&#39;s toughest problems with data and AI?</p>
<p>This is what we do every day at Databricks.</p>
<p>We are looking for an Account Executive to manage our most strategic customer in the UAE. The role will be based in London and will require you to travel to the UAE on a regular basis.</p>
<p>As a Strategic Account Executive, you are a sales professional experienced in selling to large Enterprise accounts. You know how to sell innovation and change through customer vision expansion and can guide deals forward to compress decision cycles.</p>
<p>Key responsibilities:</p>
<ul>
<li>Assess your territory and develop a successful execution strategy</li>
<li>Exceed activity and quarterly revenue targets</li>
<li>Track all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>
<li>Identify new use case opportunities and showcase value to existing customers</li>
<li>Promote the value of the Databricks Data Intelligence Platform</li>
<li>Orchestrate and utilise our field engineering teams to ensure valuable outcomes for clients</li>
<li>Build and demonstrate value with all engagements to guide successful negotiations to close point</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive understanding of the data platform, open source and cloud ecosystems</li>
<li>Highly skilled in prospecting research and ability to map out key stakeholders</li>
<li>Demonstrated success in Value Selling and developing a mutual action plan</li>
<li>Ability to influence decision-making and strategy with customer leadership teams</li>
<li>Ability to establish credibility with the C-suite</li>
<li>Adept in selling to technical buyers</li>
<li>Mastery of MEDDPICC</li>
<li>Bachelor&#39;s Degree or relevant work experience</li>
<li>Fluency in English is required, fluency in Arabic is preferred</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platform, open source, cloud ecosystems, prospecting research, value selling, MEDDPICC, Arabic</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks operates at the leading edge of the Data and AI space, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8452487002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7ba4251-36b</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[
<p>We are seeking a highly skilled Resident Solutions Architect to join our Professional Services team in Washington, D.C. As a Resident Solutions Architect, you will work with customers on short- to medium-term engagements, tackling their big data challenges using the Databricks platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>Requirements:</p>
<ul>
<li>US Top Secret Clearance is required for this position</li>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for an annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<ul>
<li>Zone 1 Pay Range: $180,656-$248,360 USD</li>
<li>Zone 2 Pay Range: $180,656-$248,360 USD</li>
<li>Zone 3 Pay Range: $180,656-$248,360 USD</li>
<li>Zone 4 Pay Range: $180,656-$248,360 USD</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, scope and timelines, documentation and white-boarding, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8356289002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e7613e05-073</externalid>
      <Title>Customer Enablement Specialist</Title>
      <Description><![CDATA[<p>Job Title: Customer Enablement Specialist</p>
<p>Location: Bellevue, Washington</p>
<p>Department: Education &amp; Training</p>
<p>CSQ227R234</p>
<p><strong>About the Role</strong></p>
<p>This role is required to work in a hybrid office setting in our Bellevue, WA office.</p>
<p><strong>The Opportunity</strong></p>
<p>Databricks runs some of the largest customer enablement programs in the industry: workshops, digital courses, labs, and webinars that reach thousands of users. The Customer Enablement Specialist turns that reach into results. You connect engaged learners to structured training plans that drive product adoption, customer success, and measurable business impact.</p>
<p>This isn’t a sales or business development role; every conversation begins with an existing Databricks user or program participant. Your focus is on helping those customers move from initial interest to tangible capability: skilled teams, completed training milestones, and activated use cases.</p>
<p>You’ll manage a broad portfolio of accounts, supporting new and emerging personas (business users, analysts, and app developers) and helping them succeed with Databricks’ latest innovations in AI/BI, Databricks Apps, and agent-based development.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Convert participation in Databricks’ scale programs (webinars, workshops, digital learning) into structured training engagements.</li>
<li>Own a high-volume enablement pipeline: identifying learner needs, recommending tailored paths, and tracking adoption progress.</li>
<li>Deliver engaging L100–L200 sessions and demos to help new personas understand what’s possible with Databricks.</li>
<li>Build enablement plans for each account, tracking trained users, completion rates, and milestone achievement.</li>
<li>Partner with Customer Success Managers (CSMs), Account Executives (AEs), and senior CEAs to align training with customer goals and renewal cycles.</li>
<li>Report key metrics (trained accounts, learner growth, conversion rates, and training revenue), using data to guide your priorities.</li>
<li>Provide structured feedback to program and curriculum teams to sharpen future customer learning experiences.</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>2–4 years in a technical, customer-facing role; technical training, pre-sales, enablement, or customer success preferred.</li>
<li>Hands-on familiarity with modern data and analytics platforms (Databricks, cloud SQL, BI tools, or data lakes).</li>
<li>Confidence delivering introductory technical content to non-expert audiences.</li>
<li>Working knowledge of AI/ML concepts; able to explain how Databricks enables practical use cases.</li>
<li>Strong communication skills and a consultative approach: discover needs, recommend paths, and gain commitment.</li>
<li>A data-driven mindset with strong organisational habits and comfort managing many concurrent accounts.</li>
<li>Team-first attitude: a proactive collaborator who knows when to escalate for deeper technical support.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Databricks certifications (e.g., Data Engineer Associate), or willingness to obtain them within 6 months.</li>
<li>Background in SaaS, cloud, or data platforms; familiarity with BI or AI/BI tools (Databricks Genie, Tableau, Power BI).</li>
<li>Exposure to Databricks Apps, REST APIs, or AI agent concepts.</li>
<li>Experience in a role with enablement or training-related revenue metrics.</li>
</ul>
<p><strong>Why This Role, Why Now</strong></p>
<p>New products create new skill gaps. As Databricks expands into AI/BI, Databricks Apps, and agent-based development, a new wave of users (business analysts, app builders, domain experts) needs to get skilled up quickly. The depth CEA team focuses on the complex, strategic, and deeply technical. This role focuses on the broad middle: high volume, new personas, and the scale-to-commitment motion that turns digital participation into real adoption. It is a high-visibility, high-impact position with a clear growth path into senior CEA work as you build depth and a track record.</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 2 Pay Range $86,600-$119,150 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$86,600-$119,150 USD</Salaryrange>
      <Skills>data and analytics platforms, cloud SQL, BI tools, data lakes, AI/ML concepts, Databricks Apps, REST APIs, AI agent concepts, Databricks certifications, SaaS, cloud, data platforms, BI or AI/BI tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8431935002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbd81d47-d7e</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. This position may be offered as Senior Solutions Consultant, Resident Solutions Architect, or Senior Resident Solutions Architect. The final title will align to your experience, technical depth, and customer-facing ownership.</p>
<p>As a Big Data Solutions Architect (Internal Title - Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term customer engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 10% of the time</li>
</ul>
<p>Preferred but not essential: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8486738002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8efd6b3b-251</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Preferred: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456973002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d94d7ea-9ca</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects that lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461330002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ab6e1e39-16f</externalid>
      <Title>Core Account Executive, Nordics (Digital Natives)</Title>
      <Description><![CDATA[<p>Want to help solve the world&#39;s toughest problems with data and AI?</p>
<p>This is what we do every day at Databricks.</p>
<p>As a Core Account Executive (Digital Natives) at Databricks, you will own a focused territory of ~10 large Digital Native spending accounts across the Nordics.</p>
<p>You will:</p>
<ul>
<li>Assess your territory and develop a successful execution strategy</li>
<li>Drive quarter-on-quarter consumption growth in existing accounts</li>
<li>Exceed activity and quarterly revenue targets</li>
<li>Track all customer details</li>
<li>Identify new use case opportunities</li>
<li>Promote the value of the Databricks Data Intelligence Platform and other products</li>
<li>Ensure 100% satisfaction among all customers</li>
</ul>
<p>We look for:</p>
<ul>
<li>A good understanding of the data platform and cloud ecosystems</li>
<li>Some exposure to the software industry and an understanding of selling SaaS, Data and Business Value</li>
<li>Experience growing consumption and closing commit deals in a direct sales role</li>
<li>Competence with prospecting research and the ability to map out key stakeholders</li>
<li>An advanced understanding of MEDDPICC</li>
<li>Experience exceeding sales targets/quotas</li>
<li>A Bachelor&#39;s Degree or relevant work experience</li>
</ul>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit our website.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platform, cloud ecosystems, software industry, SaaS, Data and Business Value, prospecting research, MEDDPICC, sales targets/quotas</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks operates at the leading edge of the Data and AI space, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8460871002</Applyto>
      <Location>Stockholm, Sweden</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d22e39a-bde</externalid>
      <Title>Data Analyst II</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>
<p>We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>
<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>
<p>Our Data Scientists, Analysts, and Engineers work together to make data, and insights derived from data, a core asset across the company.</p>
<p>What you’ll do</p>
<p>As a Data Analyst II (DA), you will play a central role in enhancing the operational tracking and reporting capabilities of different business teams across Brex.</p>
<p>You will work closely with Data Scientists, Data Engineers, and partner teams to drive meaningful insights for the business through visualizations, self-service tools, and ad-hoc analyses.</p>
<p>This is a high-impact role in a fast-paced fintech environment where your work will directly influence strategic decisions.</p>
<p>Where you’ll work</p>
<p>This role will be based in our San Francisco office.</p>
<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>
<p>We currently require a minimum of three coordinated days in the office per week: Monday, Wednesday, and Thursday.</p>
<p>As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<ul>
<li>Apply data visualization and storytelling skills in creating business intelligence solutions (such as Looker and/or Hex dashboards) that enable actionable insights.</li>
<li>Perform ad-hoc analyses and deep dives to investigate business questions, surface trends, and provide data-driven recommendations.</li>
<li>Develop self-service data tools and processes that empower business stakeholders to independently monitor the performance and health of their respective areas.</li>
<li>Collaborate closely with Data Scientists and Data Engineers to identify data sources, enable data pipelines, and support the development of analytical data models that operationalize reports and dashboards.</li>
<li>Implement and maintain rigorous data quality checks to ensure the integrity and robustness of datasets used across dashboards, reports, and analyses.</li>
<li>Partner with various departments, including Sales, Operations, Product, and Finance, to understand their data needs and deliver tailored analyses and reporting that support strategic planning.</li>
<li>Contribute to the automation of recurring analyses and reporting workflows using Python.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3+ years of experience in data analytics or a related role in a professional setting.</li>
<li>2+ years of experience working directly with Sales, Operations, Product, or equivalent business teams.</li>
<li>Fluency in SQL to manipulate data and perform complex analyses (CTEs, window functions, joins across large datasets).</li>
<li>Experience with Python for data analysis, automation, or scripting.</li>
<li>Experience with business intelligence and data visualization tools (Looker, Hex, Tableau, or similar).</li>
<li>Strong quantitative and analytical skills with a demonstrated ability to translate data into business insights.</li>
<li>Strong communication skills and the ability to work effectively with stakeholders across different functions and levels of technical fluency.</li>
<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</li>
</ul>
<p>Bonus points</p>
<ul>
<li>Familiarity with cloud data platforms (e.g., Snowflake, BigQuery, Databricks).</li>
<li>Familiarity with dbt for data modeling and transformation.</li>
<li>Exposure to data pipeline orchestration tools (e.g., Airflow).</li>
<li>Experience in fintech, financial services, or payments.</li>
<li>Comfort operating in a fast-paced, high-growth environment with evolving priorities.</li>
</ul>
<p>Compensation</p>
<p>The expected salary range for this role is $93,600 - $117,000.</p>
<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>
<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$93,600 - $117,000</Salaryrange>
      <Skills>SQL, Python, Business Intelligence, Data Visualization, Generative AI, LLM-based tools, Cloud data platforms, dbt, Data pipeline orchestration tools, Fintech, Financial services, Payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463696002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8aca0e87-abb</externalid>
      <Title>Strategic AI/BI Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a Strategic AI/BI Account Executive to help enterprise customers transform how business users interact with data. This high-impact role sits within the AI Go-To-Market team and partners closely with Enterprise Account Executives to drive adoption of Databricks AI/BI and Genie.</p>
<p>You will help organizations move beyond static dashboards to governed, conversational, AI-powered analytics at the center of the convergence of business intelligence, data platforms, and generative AI.</p>
<p>Enterprise analytics is rapidly evolving from dashboards and static reporting to conversational, AI-driven decision platforms. Databricks AI/BI and Genie empower business users to securely interact with governed data using natural language, transforming the data platform into a true decision platform.</p>
<p>If you want to be at the forefront of AI-powered analytics transformation at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>
<p>The impact you will have:</p>
<ul>
<li>Partner with Enterprise AEs to identify, qualify, and close AI/BI opportunities</li>
<li>Engage C-level, analytics, and line-of-business leaders to modernize analytics strategies</li>
<li>Displace or expand legacy BI platforms with AI-powered, governed analytics solutions</li>
<li>Lead conversations around semantic governance, self-service analytics, and natural language data access</li>
<li>Drive proof-of-value engagements and scale enterprise-wide adoption</li>
<li>Align AI/BI initiatives to measurable business outcomes (productivity, speed to insight, revenue impact)</li>
<li>Enable field teams and serve as a subject matter expert on modern analytics architectures</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Enterprise sales experience in BI, analytics, data platforms, or AI/ML</li>
<li>Strong understanding of modern analytics architectures and data governance</li>
<li>Ability to sell to both technical and business stakeholders</li>
<li>Executive presence and experience navigating complex buying cycles</li>
<li>Passion for AI and the impact of GenAI on enterprise analytics</li>
<li>Experience operating in a specialist or overlay sales model</li>
<li>Ability to translate technical capabilities into clear business value</li>
<li>7+ years of Enterprise Sales experience, exceeding quotas in larger accounts</li>
<li>Bachelor's degree or equivalent experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales experience in BI, analytics, data platforms, or AI/ML, Strong understanding of modern analytics architectures and data governance, Ability to sell to both technical and business stakeholders, Executive presence and experience navigating complex buying cycles, Passion for AI and the impact of GenAI on enterprise analytics</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It has over 10,000 organizations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8441888002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43ca4523-d1d</externalid>
      <Title>Customer Enablement Specialist</Title>
      <Description><![CDATA[<p>As a Customer Enablement Specialist at Databricks, you will play a critical role in helping customers succeed with our data and AI platform. You will be responsible for managing a broad portfolio of accounts, supporting new and emerging personas, and helping them succeed with Databricks&#39; latest innovations in AI/BI, Databricks Apps, and agent-based development.</p>
<p>Your main objective will be to convert participation in Databricks&#39; scale programs (webinars, workshops, digital learning) into structured training engagements. You will own a high-volume enablement pipeline, identifying learner needs, recommending tailored paths, and tracking adoption progress.</p>
<p>To achieve this, you will deliver engaging L100–L200 sessions and demos to help new personas understand what&#39;s possible with Databricks. You will build enablement plans for each account, tracking trained users, completion rates, and milestone achievement.</p>
<p>You will partner with Customer Success Managers (CSMs), Account Executives (AEs), and senior CEAs to align training with customer goals and renewal cycles. You will report key metrics – trained accounts, learner growth, conversion rates, and training revenue – using data to guide your priorities.</p>
<p>In addition, you will provide structured feedback to program and curriculum teams to sharpen future customer learning experiences.</p>
<p>We are looking for a highly motivated and organized individual with excellent communication skills and a consultative approach. You should have hands-on familiarity with modern data and analytics platforms, confidence delivering introductory technical content to non-expert audiences, and a working knowledge of AI/ML concepts.</p>
<p>If you are passionate about helping customers succeed and have a strong desire to learn and grow with our company, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$86,600-$119,150 USD</Salaryrange>
      <Skills>modern data and analytics platforms, customer-facing role, technical training, pre-sales, enablement, customer success, AI/BI, Databricks Apps, agent-based development, Databricks certifications, SaaS, cloud, data platforms, BI or AI/BI tools, Databricks Genie, Tableau, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8431927002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd64db3e-49f</externalid>
      <Title>Staff Software Engineer – Customer Experience Intelligence (CXI)</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re shaping the future of how customers experience support at scale. As the Staff Technical Lead for Customer Experience Intelligence, you&#39;ll design intelligent, AI-powered systems that make support faster, smarter, and more effortless.</p>
<p>In this role, you&#39;ll have end-to-end ownership of the architecture and technical strategy behind automation and agentic workflows that reduce mean time to mitigate (MTTM), boost quality, and enable our Support organization to scale impact without scaling headcount. You&#39;ll work hands-on with teams across Support, Product, and Platform Engineering to build seamless systems that anticipate customer needs before they arise.</p>
<p>You&#39;ll lead the technical foundation that transforms how customers experience support, where issues are auto-diagnosed, solutions are delivered instantly, and engineers focus their time on the toughest challenges. Your success will mean customers moving faster, trusting Databricks more deeply, and feeling the impact of your systems every day.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the technical vision and architecture for Databricks&#39; Support Automation and Tooling ecosystem</li>
<li>Leading hands-on development of automation to improve customer experience and Support scalability</li>
<li>Driving rapid, iterative development while upholding quality, safety, and reliability standards</li>
<li>Designing agentic workflows that evolve from human-in-the-loop to fully automated systems</li>
<li>Implementing observability, transparency, and rollback mechanisms for AI-driven decisions</li>
<li>Acting as the primary technical interface between Support, Product, and Platform Engineering to align technical roadmaps and unblock dependencies</li>
<li>Setting a high engineering bar for quality, reliability, and maintainability in line with Databricks standards</li>
<li>Mentoring engineers and SMEs across Software and Support Engineering functions</li>
</ul>
<p>We&#39;re looking for someone with:</p>
<ul>
<li>A BS or higher degree in Computer Science or a related field</li>
<li>Technical leadership experience in large projects similar to those described, including automation tooling, distributed systems, and APIs</li>
<li>Extensive full-stack development experience</li>
<li>Proven success designing and deploying production-grade automation in complex technical environments</li>
<li>Hands-on experience with ML-assisted systems, decision support, or agentic automation</li>
<li>Deep familiarity with distributed data platforms, developer tooling, and large-scale infrastructure systems</li>
<li>Understanding of multi-cloud environments (AWS, Azure, GCP), compliance, and security constraints</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $190,000-$261,250 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,000-$261,250 USD</Salaryrange>
      <Skills>Automation tooling, Distributed systems, APIs, Full-stack development, ML-assisted systems, Decision support, Agentic automation, Distributed data platforms, Developer tooling, Large-scale infrastructure systems, Multi-cloud environments, Compliance, Security constraints</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and operates the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8416959002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b96ce52c-eaa</externalid>
      <Title>Engineering Manager, Onboarding</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>
<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p><strong>Engineering</strong></p>
<p>Engineering at Brex is about building systems that scale with speed and intention. Our teams span Software, Data, Security, and IT, and operate with high autonomy and deep collaboration. We tackle hard technical problems, own our outcomes, and push for excellence at every level, from architecture to deployment.</p>
<p>It’s an environment where engineering is a craft, and builders become leaders.</p>
<p><strong>What you’ll do</strong></p>
<p>You will lead an engineering team focused on building the systems and product experiences that power customer activation at Brex, including onboarding, account setup, verifications, and integration workflows that help customers realize value quickly.</p>
<p>This role requires strategic thinking, operational excellence, technical leadership, and a deep passion for delivering frictionless, AI-enhanced customer journeys.</p>
<p>The ideal candidate is an engineering leader with experience scaling user-facing onboarding systems, delivering high-quality product experiences, and partnering deeply across Product, Design, Operations, and GTM teams.</p>
<p><strong>Where you’ll work</strong></p>
<p>This role will be based in our San Francisco office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday.</p>
<p>As a perk, we also have up to four weeks per year of fully remote work!</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Take an active role in driving business and product strategies, championing a seamless, intuitive, and efficient onboarding experience.</li>
<li>Collaborate with cross-functional partners across Product, Design, Operations, and Sales to define priorities and deliver delightful customer activation experiences.</li>
<li>Leverage AI to reimagine and automate onboarding and implementation workflows, improving speed, personalization, and operational leverage.</li>
<li>Drive execution of the Onboarding roadmap, ensuring timely, high-quality delivery of systems and features that help customers activate and realize value.</li>
<li>Lead and manage a team of engineers, including hiring, mentoring, performance management, and establishing strong technical direction.</li>
<li>Drive continuous improvement in engineering processes, technical architecture, and product quality.</li>
<li>Foster a culture of innovation, collaboration, accountability, and customer obsession across the team.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>6+ years of software engineering experience with strong technical depth.</li>
<li>3+ years of experience managing or leading engineers in a high-growth environment.</li>
<li>Strong technical background and understanding of software development principles.</li>
<li>Expertise leading full-stack engineering teams delivering end-to-end product experiences.</li>
<li>Regularly works with cross-functional partners (e.g. Product, Design, Operations, Sales) and excels in driving alignment across stakeholders.</li>
<li>Data-driven mindset with the ability to evaluate impact, measure funnel performance, and optimize activation metrics.</li>
<li>Track record building AI-powered product experiences, including LLM-driven automation and personalization.</li>
</ul>
<p><strong>Bonus points</strong></p>
<ul>
<li>Experience with data platforms such as Snowflake, Hex, or similar.</li>
<li>Experience building systems related to onboarding, implementation, identity, workflow automation, customer lifecycle products, or other customer-facing experiences.</li>
<li>You have started your own technology venture or were an early technical founder/employee. We value entrepreneurial spirit &amp; scrappiness!</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The expected salary range for this role is $240,000 - $300,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$240,000 - $300,000</Salaryrange>
      <Skills>Software engineering, Technical leadership, AI-powered product experiences, Data-driven mindset, Cross-functional collaboration, Full-stack engineering, Data platforms, Workflow automation, Customer lifecycle products, LLM-driven automation, Personalization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8461600002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8664b981-66c</externalid>
      <Title>Data Platform Solutions Architect (Professional Services) - Emerging Enterprise &amp; DNB</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. Depending on experience and scope, this position may be offered as a Senior Solutions Consultant or a Resident Solutions Architect. You may know this role as a Big Data Solutions Architect, Analytics Architect, Data Platform Architect, or Technical Consultant. The final title will align to your experience, technical depth, and customer-facing ownership.</p>
<p>As a Data Platform Solutions Architect on our Professional Services team for the Emerging Enterprise &amp; Digital Natives business in EMEA, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, providing training, and completing other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Drive high-impact customer projects: Design and build reference architectures, implement production use cases, and create “how-to” guides tailored to the unique needs of fast-moving Emerging Enterprise &amp; Digital Native customers in EMEA.</li>
<li>Collaborate on project scoping: Work closely with Engagement Managers and customers to define project scope, schedules, and deliverables for professional services engagements.</li>
<li>Enable transformational initiatives: Guide strategic customers through their end-to-end big data journeys, migrating from legacy platforms and deploying industry-leading data and AI applications on the Databricks platform.</li>
<li>Consult on architecture &amp; design: Provide thought leadership on solution design and implementation strategies, ensuring customers can successfully evaluate and adopt Databricks.</li>
<li>Offer advanced support: Serve as an escalation point for operational issues, collaborating with Databricks Support and Engineering to resolve challenges quickly.</li>
<li>Align technical delivery: Partner with cross-functional Databricks teams (Technical, PM, Architecture, and Customer Success) to align on milestones, ensuring customer needs and deadlines are met.</li>
<li>Amplify product feedback: Provide implementation insights to Databricks Product and Support teams, guiding rapid improvements in features and troubleshooting for customers.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 10% of the time</li>
<li>Databricks Certification preferred, but not essential</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439047002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2afc821d-248</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, providing training, and completing other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494149002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a0bf55-11a</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461222002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d723067-22d</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494144002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7e5c6f46-bb6</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456975002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b4a461d1-b6b</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company founded by the original creators of Apache Spark, Delta Lake, and MLflow, and a pioneer of the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494128002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32d8d11d-9dc</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8371312002</Applyto>
      <Location>New York City, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e92e8a2-811</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494130002</Applyto>
      <Location>New York City, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6da16fc8-e49</externalid>
      <Title>Product Manager, Trip Quality Merchandising and AI</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced product leader to own the guest-facing quality merchandising experience. As a Product Manager, you will join the team focused on trip quality, building features that close the gap between guest expectations and actual trip experiences.</p>
<p>This team&#39;s work touches every single booking decision on a platform that facilitates hundreds of millions of trips and an incredibly diverse global inventory. You will ship and iterate on AI-powered features that help guests make more confident booking decisions, synthesizing signals from reviews, ratings, and listing data into quality information that is clear, personalized, and trustworthy.</p>
<p>This means working hands-on with Data Science to evaluate model performance and translate outputs into better product decisions, not just defining requirements and handing them off. This role requires close partnership with the Policy, Privacy, and Legal teams, particularly on review integrity, AI-generated content, and trust-related guest-facing features.</p>
<p>You will work especially closely with the quality data platform team to generate and consume accurate quality signals that foster a trusted marketplace, as well as across Design, Engineering, and Data Science to ship and iterate at scale.</p>
<p>This is a critical role that spans the end-to-end guest experience around listing quality information, and the features you build will be part of the experience for every guest who considers booking on Airbnb.</p>
<p>As a Product Manager, you will:</p>
<ul>
<li>Develop and execute the product roadmap for Trip Quality Merchandising and AI, with a focus on shipping AI-powered features that improve how guests evaluate listing quality.</li>
<li>Define product requirements for generative AI features (e.g. listing summaries, quality highlights) and ML ranking systems, including evaluation criteria, guardrails, and iteration plans.</li>
<li>Partner with Data Science to assess model performance, understanding where outputs are accurate and trustworthy and where they fall short, and translate those assessments into concrete product improvements.</li>
<li>Craft the product narrative that inspires teams, leadership, and the company and builds alignment on the quality merchandising strategy.</li>
<li>Partner with design, engineering, and customer support to deliver features that improve both guest and host experiences.</li>
<li>Collaborate closely with our Policy, Privacy, and Legal teams on topics that are essential to making Airbnb one of the most trusted marketplaces in the world.</li>
</ul>
<p>You will need to have:</p>
<ul>
<li>10+ years of product management experience.</li>
<li>Hands-on track record shipping AI/ML-powered features on consumer products, including generative AI features (LLM-based summarization, content generation) and ML ranking or personalization systems, with direct involvement in defining evaluation criteria and improving model outputs.</li>
<li>Expert ability to use data and business analysis to inform product strategy and drive decisions.</li>
<li>Domain familiarity with marketplace trust/quality, search/ranking/relevance, or content/feedback systems is strongly preferred.</li>
<li>Experience creating product messaging and delivering it to customers.</li>
<li>Demonstrated track record of building cross-functional and executive leadership alignment.</li>
<li>Experience working on consumer products with marketplace dynamics on a global scale.</li>
<li>Entrepreneurial track record of taking an idea to reality.</li>
<li>Excited to lead cross-functional execution, from strategy through shipped product in a fast-paced environment.</li>
<li>Desire to do individual contributor product management work.</li>
<li>Excellent written and verbal communication; adept at simplifying complex AI/ML and data concepts for diverse audiences.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$224,000-$280,000 USD</Salaryrange>
      <Skills>product management, AI/ML, generative AI, ML ranking systems, data science, policy, privacy, legal, quality data platform, design, engineering, customer support</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, founded in 2007 and headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>224000</Compensationmin>
      <Compensationmax>280000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7651661</Applyto>
      <Location>San Francisco, CA, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>