{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/lakehouse-architecture"},"x-facet":{"type":"skill","slug":"lakehouse-architecture","display":"Lakehouse Architecture","count":8},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53ee0ef3-c62"},"title":"Staff Data Engineer, Analytics Data Engineering","description":"<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>\n<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>\n<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. 
You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>\n</ul>\n<ul>\n<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>\n</ul>\n<ul>\n<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>\n</ul>\n<ul>\n<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>\n</ul>\n<ul>\n<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>\n</ul>\n<ul>\n<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>\n</ul>\n<ul>\n<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>\n</ul>\n<ul>\n<li>12+ 
years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>\n</ul>\n<ul>\n<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>\n</ul>\n<ul>\n<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>\n</ul>\n<ul>\n<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>\n</ul>\n<ul>\n<li>Experience leading orchestration or platform modernization efforts at scale</li>\n</ul>\n<ul>\n<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>\n</ul>\n<ul>\n<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>\n</ul>\n<ul>\n<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2 $198,900-$269,100 USD</p>\n<p>US Zone 3 $176,800-$239,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53ee0ef3-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7595183","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$198,900-$269,100 
USD","x-skills-required":["SQL","Python","Dimensional data modeling","Schema design","Scalable data architecture","Orchestration tools","dbt"],"x-skills-preferred":["Databricks","Modern lakehouse architectures","Data governance and observability tools","Metrics/semantic layer"],"datePosted":"2026-04-18T15:56:35.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198900,"maxValue":269100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e68e5c3b-1e2"},"title":"Lakebase Account Executive","description":"<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>\n<p>You will lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>\n<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational 
workloads that power real-time, AI-driven experiences.</p>\n<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>\n<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>\n<p>You will compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>\n<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</p>\n<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>\n<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<p>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</p>\n<p>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</p>\n<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>\n<p>Strategic account engagement – Partner with 
account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>\n<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>\n<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</p>\n<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>\n<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</p>\n<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>\n<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>\n<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>\n<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>\n<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</p>\n<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>\n<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>\n<p>Bachelor’s degree or equivalent 
practical experience.</p>\n<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>\n<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>\n<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</p>\n<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>\n<p>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</p>\n<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e68e5c3b-1e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8449848002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","operational databases","OLTP workloads","transactional cloud database services","data platforms","lakehouse architectures","cloud ecosystems","reverse ETL","real-time decisioning","operational analytics","AI-native applications","agent-driven applications","low-latency","highly scalable operational data 
services"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:06.106Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data services"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_02ba8342-079"},"title":"Specialist Solutions Architect - Data Warehousing (Healthcare & Life Sciences)","description":"<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>\n<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>\n<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to 
architecture design to production deployment</li>\n<li>Prove the value of the Databricks Data Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>\n<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>\n<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>\n<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>\n<li>Experience migrating on-premise EDW workloads to the public cloud</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>\n<li>Production programming experience in SQL and Python, Scala, or Java</li>\n<li>Experience with the AWS, Azure, or GCP clouds</li>\n<li>2 years 
professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>\n<li>2 years customer-facing experience in a pre-sales or post-sales role</li>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_02ba8342-079","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8337429002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["data warehousing","cloud data warehousing","Databricks","lakehouse architecture","SQL","Python","Scala","Java","AWS","Azure","GCP","data analytics","NoSQL","MPP","OLTP","OLAP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:06.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6421dea-6e3"},"title":"Strategic Hunter Account Executive - Lakebase","description":"<p>We are seeking a Strategic Hunter Account Executive to help 
customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>This high-impact role sits within the Lakebase Go-To-Market team and partners closely with regional Account Executives to drive adoption of Lakebase with platform, application, and data teams.</p>\n<p>Lakebase gives customers a unified, governed foundation for operational workloads and AI-native applications, helping them move away from a fragmented estate of point databases toward a modern, scalable, serverless Postgres service.</p>\n<p>If you want to be at the forefront of operational databases for AI and intelligent applications at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>\n<p><strong>The impact you will have</strong></p>\n<ul>\n<li>Drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</li>\n</ul>\n<ul>\n<li>Lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</li>\n</ul>\n<ul>\n<li>Sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</li>\n</ul>\n<ul>\n<li>Run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</li>\n</ul>\n<ul>\n<li>Orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</li>\n</ul>\n<ul>\n<li>Compete 
and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</li>\n</ul>\n<ul>\n<li>Align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</li>\n</ul>\n<ul>\n<li>Enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</li>\n</ul>\n<p><strong>What success looks like in this role</strong></p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<ul>\n<li>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>\n</ul>\n<ul>\n<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>\n</ul>\n<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<ul>\n<li>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</li>\n</ul>\n<ul>\n<li>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</li>\n</ul>\n<ul>\n<li>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not 
present.</li>\n</ul>\n<ul>\n<li>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party events.</li>\n</ul>\n<p><strong>What we look for</strong></p>\n<ul>\n<li>7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</li>\n</ul>\n<ul>\n<li>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</li>\n</ul>\n<ul>\n<li>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</li>\n</ul>\n<ul>\n<li>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</li>\n</ul>\n<ul>\n<li>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</li>\n</ul>\n<ul>\n<li>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</li>\n</ul>\n<ul>\n<li>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</li>\n</ul>\n<ul>\n<li>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</li>\n</ul>\n<ul>\n<li>Bachelor’s degree or equivalent practical experience.</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>Experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic 
accounts.</li>\n</ul>\n<ul>\n<li>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</li>\n</ul>\n<ul>\n<li>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</li>\n</ul>\n<ul>\n<li>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</li>\n</ul>\n<ul>\n<li>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</li>\n</ul>\n<ul>\n<li>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>\n<p><strong>Our Commitment to Diversity and Inclusion</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d6421dea-6e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8477547002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data platforms","operational databases","Postgres","MySQL","cloud-native DBaaS","data/AI infrastructure","technical buyers","business leaders","modern data and application architectures","cloud-native services","microservices","event-driven systems","AI and analytics strategies","technical stakeholders","business 
stakeholders","value selling skills","discovering pain","building a business case","quantified outcomes","communication","storytelling","negotiation skills"],"x-skills-preferred":["OLTP workloads","transactional cloud database services","lakehouse architectures","cloud ecosystems","reverse ETL","real-time decisioning","operational analytics use cases","AI-native applications","agent-driven applications","high-growth environments","category-creating environments","partner collaborations","ISV collaborations"],"datePosted":"2026-04-18T15:52:47.849Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India; Mumbai, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data platforms, operational databases, Postgres, MySQL, cloud-native DBaaS, data/AI infrastructure, technical buyers, business leaders, modern data and application architectures, cloud-native services, microservices, event-driven systems, AI and analytics strategies, technical stakeholders, business stakeholders, value selling skills, discovering pain, building a business case, quantified outcomes, communication, storytelling, negotiation skills, OLTP workloads, transactional cloud database services, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics use cases, AI-native applications, agent-driven applications, high-growth environments, category-creating environments, partner collaborations, ISV collaborations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_47807ca3-e36"},"title":"Strategic AI/BI Account Executive","description":"<p>We are seeking a Strategic AI/BI Account Executive to help enterprise customers transform how business users interact with data. 
This high-impact role sits within the AI Go-To-Market team and partners closely with Enterprise Account Executives to drive adoption of Databricks AI/BI and Genie in APJ.</p>\n<p>You will help organisations move beyond static dashboards to governed, conversational, AI-powered analytics at the centre of the convergence of business intelligence, data platforms, and generative AI. Enterprise analytics is rapidly evolving from dashboards and static reporting to conversational, AI-driven decision platforms. Databricks AI/BI and Genie empower business users to securely interact with governed data using natural language, transforming the data platform into a true decision platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Partner with Enterprise AEs to identify, qualify, and close AI/BI opportunities</li>\n<li>Engage C-level, analytics, and line-of-business leaders to modernise analytics strategies</li>\n<li>Displace or expand legacy BI platforms with AI-powered, governed analytics solutions</li>\n<li>Lead conversations around semantic governance, self-service analytics, and natural language data access</li>\n<li>Drive proof-of-value engagements and scale enterprise-wide adoption</li>\n<li>Align AI/BI initiatives to measurable business outcomes (productivity, speed to insight, revenue impact)</li>\n<li>Enable field teams and serve as a subject matter expert on modern analytics architectures</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Enterprise sales experience in BI, analytics, data platforms, or AI/ML</li>\n<li>Strong understanding of modern analytics architectures and data governance</li>\n<li>Ability to sell to both technical and business stakeholders</li>\n<li>Executive presence and experience navigating complex buying cycles</li>\n<li>Passion for AI and the impact of GenAI on enterprise analytics</li>\n<li>Experience operating in a specialist or overlay sales model</li>\n<li>Ability to translate technical capabilities into clear business 
value</li>\n<li>7+ years of Enterprise Sales experience, exceeding quotas in larger accounts</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Experience with modern BI platforms such as Tableau, Power BI, Looker, or ThoughtSpot</li>\n<li>Familiarity with semantic layers, metrics stores, or governed data models</li>\n<li>Understanding of lakehouse architectures and cloud data platforms</li>\n<li>Exposure to GenAI, natural language interfaces, or conversational applications</li>\n<li>Consulting or solution design experience in customer-facing roles</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_47807ca3-e36","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8441884002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Enterprise sales experience in BI, analytics, data platforms, or AI/ML","Strong understanding of modern analytics architectures and data governance","Ability to sell to both technical and business stakeholders","Executive presence and experience navigating complex buying cycles","Passion for AI and the impact of GenAI on enterprise analytics"],"x-skills-preferred":["Experience with modern BI platforms such as Tableau, Power BI, Looker, or ThoughtSpot","Familiarity with semantic layers, metrics stores, or governed data models","Understanding of lakehouse architectures and cloud data platforms","Exposure to GenAI, natural language interfaces, or conversational applications","Consulting or solution design experience in customer-facing 
roles"],"datePosted":"2026-04-18T15:52:23.856Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Enterprise sales experience in BI, analytics, data platforms, or AI/ML, Strong understanding of modern analytics architectures and data governance, Ability to sell to both technical and business stakeholders, Executive presence and experience navigating complex buying cycles, Passion for AI and the impact of GenAI on enterprise analytics, Experience with modern BI platforms such as Tableau, Power BI, Looker, or ThoughtSpot, Familiarity with semantic layers, metrics stores, or governed data models, Understanding of lakehouse architectures and cloud data platforms, Exposure to GenAI, natural language interfaces, or conversational applications, Consulting or solution design experience in customer-facing roles"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eda5b2b8-a68"},"title":"Senior Solutions Architect - AI/BI","description":"<p>We are seeking a Senior Solutions Architect - AI/BI to join our Field Engineering team in London. 
The successful candidate will be responsible for executing on Databricks&#39; strategic Product Operating Model, providing enhanced focus on earlier stage, highly prioritized product lines to establish product market fit and set the course for rapid revenue growth.</p>\n<p>As a Senior Solutions Architect - AI/BI, you will work in partnership with direct account teams to jointly engage clients, foster the necessary relationships, position the specific product line in depth, and provide compelling reasons for clients to adopt and grow the usage of the given product.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>\n<li>Serving as a trusted advisor, expert Solutions Architect, and champion, building technical credibility with stakeholders to drive product adoption and vision.</li>\n<li>Enabling clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>\n<li>Influencing the product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>\n</ul>\n<p>To succeed in this role, you will need:</p>\n<ul>\n<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>\n<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>\n<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>\n<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>\n<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>\n<li>Broad experience (in
two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>\n</ul>\n<p>If you are a motivated and experienced professional with a passion for data and AI, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eda5b2b8-a68","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8407183002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions","Ability to advise customers in lakehouse analytics architecture","Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers","Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:38.084Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions, Ability to advise customers in lakehouse analytics architecture, Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers, Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and 
SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5b743bb-d8f"},"title":"Product Manager, AI Platforms","description":"<p>The AI Platform Product Manager will drive the strategy and execution of Shield AI&#39;s next-generation autonomy intelligence stack. This PM owns the product vision and roadmap for the Hivemind AI Platform, ensuring we can manufacture, govern, and field advanced world models, robotics foundation models, and vision-language-action systems safely and at scale.</p>\n<p>This role sits at the intersection of AI/ML, autonomy, model lifecycle, infrastructure, and product strategy. The PM partners closely with engineering, AI research, Hivemind Solutions, and field teams to deliver the tooling that enables sovereign autonomy, AI Factories at the edge, and continuous learning, capabilities that are central to Shield AI&#39;s strategic direction.</p>\n<p>This is a high-impact role for an experienced product leader excited to define how foundation models are trained, validated, governed, and deployed across thousands of autonomous systems in highly contested environments.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>AI Model Development &amp; Training Platform</li>\n</ul>\n<p>Own the roadmap for foundation model training workflows, including dataset ingestion, curation, labeling, synthetic data generation, domain model training, and distillation pipelines. Define requirements for world models, robotics models, and VLA-based training, evaluation, and specialization. Lead the evolution of MLOps capabilities in Forge, including data lineage, experiment tracking, model versioning, and scalable evaluation suites.</p>\n<ul>\n<li>Data, Simulation &amp; Synthetic Data Factory</li>\n</ul>\n<p>Define product requirements for synthetic data generation, simulation-integrated data flywheels, and automated scenario generation.
Partner with Digital Twin, Simulation, and autonomy teams to convert natural-language mission inputs into data needs, training procedures, and model variants.</p>\n<ul>\n<li>Safe Deployment &amp; Model Governance</li>\n</ul>\n<p>Lead the development of model governance and auditability tooling, including model cards, dataset rights, lineage tracking, safety gates, and compliance evidence. Build guardrails and workflows to safely deploy models onto edge hardware in disconnected, GPS- or comms-denied environments. Partner with Safety, Certification, Cyber, and Engineering teams to ensure traceability and evaluation pipelines meet operational and accreditation requirements.</p>\n<ul>\n<li>Edge Deployment &amp; AI Factory Integration</li>\n</ul>\n<p>Partner with Pilot, EdgeOS, and hardware teams to integrate foundation-model-based perception and reasoning into autonomy behaviors. Define requirements for distillation, quantization, and inference tooling as part of the “three-computer” development and deployment model. Ensure closed-loop workflows between cloud model training and edge-native execution.</p>\n<ul>\n<li>Cross-Functional Leadership</li>\n</ul>\n<p>Collaborate with Engineering, Research, Product, Customer Engagement, and Solutions teams to ensure model outputs meet mission and platform constraints. Translate advanced AI capabilities into intuitive workflows that platform OEMs and partner nations can use to build sovereign AI factories. Sequence foundational capabilities that unblock autonomy, simulation, and customer-facing product teams.</p>\n<ul>\n<li>User &amp; Customer Impact</li>\n</ul>\n<p>Develop deep empathy for ML engineers, autonomy developers, and Solutions engineers who rely on the platform. Capture operational data gaps, mission-driven model needs, and domain-specific specialization requirements. 
Lead demos and onboarding for model-development capabilities across internal and external teams.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5b743bb-d8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/7886f437-2d5e-4616-8dcb-3dc488f1f585","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000 - $290,000 a year","x-skills-required":["AI Model Development & Training Platform","Data, Simulation & Synthetic Data Factory","Safe Deployment & Model Governance","Edge Deployment & AI Factory Integration","Cross-Functional Leadership","User & Customer Impact","Strong engineering background","Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure","Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments","Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows"],"x-skills-preferred":["Experience working on autonomy, robotics, embedded AI, or mission-critical systems","Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures","Experience supporting defense, dual-use, or safety-critical AI systems","Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment)","Advanced degree in engineering, ML/AI, robotics, or a related field"],"datePosted":"2026-04-17T13:02:54.419Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Diego"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI Model Development & Training Platform, Data, Simulation & Synthetic Data Factory, Safe Deployment & Model Governance, Edge Deployment & AI Factory Integration, Cross-Functional Leadership, User & Customer Impact, Strong engineering background, Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure, Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments, Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows, Experience working on autonomy, robotics, embedded AI, or mission-critical systems, Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures, Experience supporting defense, dual-use, or safety-critical AI systems, Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment), Advanced degree in engineering, ML/AI, robotics, or a related field","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_015e5c6d-a31"},"title":"Senior Data Engineer","description":"<p><strong>Why Valvoline Global Operations?</strong></p>\n<p>At Valvoline Global Operations, we&#39;re proud to be The Original Motor Oil, but we&#39;ve never rested on being first. Founded in 1866, we introduced the world&#39;s first branded motor oil, staking our claim as a pioneer in the automotive and industrial solutions industry.</p>\n<p><strong>Job Purpose</strong></p>\n<p>We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. 
The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem.</p>\n<p><strong>How You Make an Impact (Job Accountabilities)</strong></p>\n<ul>\n<li>Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.</li>\n<li>Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders&#39; goals.</li>\n<li>Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms.</li>\n<li>Ensure high standards of data quality and implement data governance practices. Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.</li>\n<li>Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure.</li>\n<li>Create documentation for the platform infrastructure and processes, and train other team members and users to use the platform effectively.</li>\n</ul>\n<p><strong>What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Engineering, Information Systems, or a related field.</li>\n<li>5-7+ years of experience in a data engineering or related role.</li>\n<li>Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.</li>\n<li>Proficiency in SQL and at least one programming language (e.g., Python, Scala).</li>\n<li>Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services.</li>\n<li>Familiarity with data orchestration tools (e.g., PySpark, Airflow, Azure
Data Factory) and CI/CD pipelines.</li>\n</ul>\n<p><strong>Competencies Desired</strong></p>\n<ul>\n<li>Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).</li>\n<li>Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.</li>\n<li>SAP BW or SAP HANA experience is a plus.</li>\n<li>Experience with BI tools like Power BI or Tableau.</li>\n<li>Understanding of data governance frameworks and data security best practices.</li>\n<li>Exposure to data lakehouse architecture and real-time streaming data pipelines.</li>\n<li>Certifications in Databricks, SAP, or cloud platforms are advantageous.</li>\n</ul>\n<p><strong>Working Conditions / Physical Requirements / Travel Requirements</strong></p>\n<ul>\n<li>Normal Office environment.</li>\n<li>Prolonged periods of computer use and frequent participation in meetings</li>\n<li>Occasional walking, standing, and light lifting (up to 10 lbs)</li>\n</ul>\n<ul>\n<li>Minimal travel required.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_015e5c6d-a31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Valvoline Global Operations","sameAs":"https://jobs.valvolineglobal.com","logo":"https://logos.yubhub.co/jobs.valvolineglobal.com.png"},"x-apply-url":"https://jobs.valvolineglobal.com/job/Senior-Data-Engineer/1316654400/","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","Databricks","SAP Datasphere","SQL","Python","Scala","cloud platforms","data orchestration tools","CI/CD pipelines"],"x-skills-preferred":["Databricks","SAP Datasphere","SAP BW","SAP HANA","Power BI","Tableau","data governance frameworks","data security best practices","data lakehouse 
architecture","real-time streaming data pipelines"],"datePosted":"2026-03-08T22:14:37.507Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"data engineering, Databricks, SAP Datasphere, SQL, Python, Scala, cloud platforms, data orchestration tools, CI/CD pipelines, Databricks, SAP Datasphere, SAP BW, SAP HANA, Power BI, Tableau, data governance frameworks, data security best practices, data lakehouse architecture, real-time streaming data pipelines"}]}