{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/oltp"},"x-facet":{"type":"skill","slug":"oltp","display":"Oltp","count":10},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ad5c420d-b2d"},"title":"Senior Solutions Architect - Lakebase","description":"<p>The Solutions Architect (Lakebase) team executes on Databricks&#39; strategic Product Operating Model that provides enhanced focus on earlier stage, highly prioritised product lines in order to establish product market fit, and set the course for rapid revenue growth.</p>\n<p>They are part of a global go-to-market team mandate, though individually will cover a specific, local region. Clients may span across one or more business units and verticals.</p>\n<p>By working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, position in-depth the specific product line, so as to provide compelling reasons for clients to adopt and grow the usage of the given product.</p>\n<p>The Solutions Architect (Lakebase) is paired with an Account Executive aligned to a given product line with specific targets accordingly. 
Together, they will devise and implement a strategy across their assigned set of accounts, develop presentations, demos and other assets and deliver them such that clients make an informed decision as they decide to adopt the product-line in a meaningful way.</p>\n<p>The Lakebase product-line requires the following core technical competencies:</p>\n<ul>\n<li>10+ years of transactional database (OLTP) expertise across engineering, product development, administration, and pre-sales, with a proven track record of designing and delivering client-facing solutions.</li>\n<li>Credibility in influencing OLTP products with the market insight needed to shape and prioritise roadmap capabilities.</li>\n<li>Experience architecting solutions that integrate transactional data systems within broader Big Data, Lakehouse, and AI ecosystems.</li>\n<li>Infrastructure, platform and administration expertise around disaster recovery, high availability, backup and recovery, scale-out methods, identity and security management, migrations (vendor-to-vendor, on-prem to cloud)</li>\n</ul>\n<p>Impact</p>\n<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</p>\n<p>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</p>\n<p>Enable clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</p>\n<p>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams</p>\n<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</p>\n<p>Competencies &amp; Responsibilities</p>\n<ul>\n<li>6+ years in a customer-facing, 
pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>\n<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>\n<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organisations to drive customer outcomes.</li>\n<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>\n<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>\n<li>Broad experience in, and understanding of, two or more of the following fields: data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>\n<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering or similar.</li>\n<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ad5c420d-b2d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8407181002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Transactional database (OLTP)","Cloud infrastructure","Data engineering","Data warehousing","AI","ML","Governance","Transactional systems","App development","Streaming"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:07.817Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United 
Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Transactional database (OLTP), Cloud infrastructure, Data engineering, Data warehousing, AI, ML, Governance, Transactional systems, App development, Streaming"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bb1344c-662"},"title":"Sr. Solutions Engineer, Retail - CPG","description":"<p>We are looking for a Senior Solutions Engineer to join our team. As a Senior Solutions Engineer, you will work with large enterprises in the Retail and CPG space to help them become more data-driven. You will define and direct the technical strategy for our largest and most important accounts, leading to more widespread use of our products and wider and deeper adoption of ML &amp; AI.</p>\n<p>You will work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives. You will also work with a team of engineers to build proofs of concept and demonstrate our products.</p>\n<p>The ideal candidate will have a strong background in value selling, technical account management, and technical leadership. 
They will also have a solid understanding of big data, data science, and cloud technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define and direct the technical strategy for our largest and most important accounts</li>\n<li>Work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives</li>\n<li>Collaborate with a team of engineers to build proofs of concept and demonstrate our products</li>\n<li>Provide technical guidance and support to customers</li>\n<li>Work with customers to identify and address technical issues</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience working with large enterprises in the Retail and CPG space</li>\n<li>3+ years of experience in a pre-sales capacity or supporting sales activity</li>\n<li>Strong background in value selling, technical account management, and technical leadership</li>\n<li>Solid understanding of big data, data science, and cloud technologies</li>\n<li>Experience with design and implementation of big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Production programming experience in Python, R, Scala, or Java</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>","url":"https://yubhub.co/jobs/job_9bb1344c-662","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7507778002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["big data","data science","cloud technologies","Hadoop","NoSQL","MPP","OLTP","OLAP","Python","R","Scala","Java"],"x-skills-preferred":["Databricks 
Certification"],"datePosted":"2026-04-18T15:57:56.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud technologies, Hadoop, NoSQL, MPP, OLTP, OLAP, Python, R, Scala, Java, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e68e5c3b-1e2"},"title":"Lakebase Account Executive","description":"<p>We are seeking a Lakebase Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>As a Lakebase Account Executive, you will drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</p>\n<p>You will lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</p>\n<p>You will sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</p>\n<p>You will run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</p>\n<p>You will orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</p>\n<p>You will compete and win against legacy and 
cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</p>\n<p>You will align to measurable business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</p>\n<p>You will partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</p>\n<p>You will enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<p>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</p>\n<p>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</p>\n<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<p>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</p>\n<p>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</p>\n<p>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</p>\n<p>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and third-party 
events.</p>\n<p>The interview process is designed to evaluate candidates across all four of these dimensions.</p>\n<p>We are looking for a candidate with 7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</p>\n<p>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</p>\n<p>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</p>\n<p>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</p>\n<p>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</p>\n<p>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</p>\n<p>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</p>\n<p>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</p>\n<p>Bachelor’s degree or equivalent practical experience.</p>\n<p>Preferred qualifications include experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</p>\n<p>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</p>\n<p>Understanding of reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for 
customer-facing and internal applications.</p>\n<p>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</p>\n<p>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</p>\n<p>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</p>","url":"https://yubhub.co/jobs/job_e68e5c3b-1e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8449848002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","operational databases","OLTP workloads","transactional cloud database services","data platforms","lakehouse architectures","cloud ecosystems","reverse ETL","real-time decisioning","operational analytics","AI-native applications","agent-driven applications","low-latency","highly scalable operational data services"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:06.106Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Postgres, operational databases, OLTP workloads, transactional cloud database services, data platforms, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics, AI-native applications, agent-driven applications, low-latency, highly scalable operational data 
services"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_02ba8342-079"},"title":"Specialist Solutions Architect - Data Warehousing (Healthcare & Life Sciences)","description":"<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>\n<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>\n<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>\n<li>Prove the value of the Databricks Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>\n<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for 
production</li>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>\n<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>\n<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>\n<li>Experience migrating on-premise EDW workloads to the public cloud</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>\n<li>Production programming experience in SQL and Python, Scala, or Java</li>\n<li>Experience with the AWS, Azure, or GCP clouds</li>\n<li>2 years of professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>\n<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n<li>Can travel up to 30% when needed</li>\n</ul>",
"url":"https://yubhub.co/jobs/job_02ba8342-079","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8337429002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["data warehousing","cloud data warehousing","Databricks","lakehouse architecture","SQL","Python","Scala","Java","AWS","Azure","GCP","data analytics","NoSQL","MPP","OLTP","OLAP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:06.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d4ebd626-2bf"},"title":"Staff+ Software Engineer, Databases","description":"<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>\n<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>\n<p>This is a unique opportunity to tackle database challenges at unprecedented scale. 
You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Drive the technical direction for database solutions used across Product and Research</li>\n</ul>\n<ul>\n<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>\n</ul>\n<ul>\n<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>\n</ul>\n<ul>\n<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>\n</ul>\n<ul>\n<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>\n</ul>\n<ul>\n<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>\n</ul>\n<ul>\n<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>\n</ul>\n<ul>\n<li>Make critical build vs. 
buy decisions for database technologies</li>\n</ul>\n<p>You might be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>\n</ul>\n<ul>\n<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>\n</ul>\n<ul>\n<li>Have successfully scaled databases through massive growth at high-growth companies</li>\n</ul>\n<ul>\n<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>\n</ul>\n<ul>\n<li>Excel at technical leadership and cross-functional collaboration</li>\n</ul>\n<ul>\n<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>\n</ul>\n<ul>\n<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>\n</ul>\n<ul>\n<li>Experience building multi-cloud or hybrid cloud database solutions</li>\n</ul>\n<ul>\n<li>Knowledge of database orchestration and automation at scale</li>\n</ul>\n<ul>\n<li>Background at companies known for database excellence</li>\n</ul>\n<p>Note: Prior AI/ML infrastructure experience is not required. 
We value deep infrastructure/databases expertise from any domain.</p>","url":"https://yubhub.co/jobs/job_d4ebd626-2bf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5151069008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$320,000-$485,000 USD","x-skills-required":["database architecture","OLTP systems","distributed database systems","database scaling","database performance optimization"],"x-skills-preferred":["PostgreSQL","MySQL","DynamoDB","Redis","Temporal","vector databases","async job processing frameworks"],"datePosted":"2026-04-18T15:53:43.551Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database architecture, OLTP systems, distributed database systems, database scaling, database performance optimization, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, vector databases, async job processing frameworks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6421dea-6e3"},"title":"Strategic Hunter Account Executive - Lakebase","description":"<p>We are seeking a Strategic Hunter Account Executive to help customers modernize their operational data foundation with Databricks Lakebase, our fully-managed Postgres offering for intelligent applications.</p>\n<p>This high-impact role sits 
within the Lakebase Go-To-Market team and partners closely with regional Account Executives to drive adoption of Lakebase with platform, application, and data teams.</p>\n<p>Lakebase gives customers a unified, governed foundation for operational workloads and AI-native applications, helping them move away from a fragmented estate of point databases toward a modern, scalable, serverless Postgres service.</p>\n<p>If you want to be at the forefront of operational databases for AI and intelligent applications at one of the fastest-growing data and AI companies in the world, this is your opportunity.</p>\n<p><strong>The impact you will have</strong></p>\n<ul>\n<li>Drive new Lakebase revenue by identifying, qualifying, and closing Lakebase opportunities within a defined territory, in partnership with regional Account Executives and the broader account team.</li>\n</ul>\n<ul>\n<li>Lead with outcomes for key Lakebase personas, including platform teams and developers, data teams, and central IT, articulating how Lakebase helps them ship features faster, simplify operational data architectures, and improve governance and cost efficiency.</li>\n</ul>\n<ul>\n<li>Sell the value of fully-managed Postgres for intelligent applications, positioning Lakebase as the optimal choice for operational workloads that power real-time, AI-driven experiences.</li>\n</ul>\n<ul>\n<li>Run complex, multi-threaded sales cycles from discovery and value hypothesis through commercial negotiation and close, navigating executive, technical, and line-of-business stakeholders.</li>\n</ul>\n<ul>\n<li>Orchestrate proof-of-value and POCs that validate Lakebase’s benefits for OLTP-style workloads, reverse ETL, and AI/ML-driven applications, in partnership with solution architects and specialists.</li>\n</ul>\n<ul>\n<li>Compete and win against legacy and cloud-native operational databases by leveraging our compete assets, benchmarks, and customer references.</li>\n</ul>\n<ul>\n<li>Align to measurable 
business outcomes such as performance, developer productivity, time-to-market for new features, cost reduction, and simplification of the operational data landscape.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with Product Management, Marketing, Customer Success, and Partner teams to shape territory plans, launch plays, and co-selling motions with key ISVs and GSIs.</li>\n</ul>\n<ul>\n<li>Enable the field by sharing Lakebase best practices, success stories, and sales motions with broader sales teams, helping scale Lakebase proficiency across the organization.</li>\n</ul>\n<p><strong>What success looks like in this role</strong></p>\n<p>This role requires the ability to operate across two key motions simultaneously:</p>\n<ul>\n<li>Establish top strategic focus accounts by engaging application development teams to create net-new intelligent applications leveraging Lakebase.</li>\n</ul>\n<ul>\n<li>Drive longer-term Postgres standardization and migration within Databricks&#39; most strategic accounts.</li>\n</ul>\n<p>Candidates should demonstrate how they can act as a force multiplier across multiple dimensions of the business.</p>\n<p>Success in this role requires strength in four areas:</p>\n<ul>\n<li>Business ownership – Operate at a business-unit level by tracking revenue, pipeline, and key observations, and by identifying areas needing additional focus or support.</li>\n</ul>\n<ul>\n<li>Strategic account engagement – Partner with account teams to engage priority accounts across the global DB700, driving strategic opportunities from initial engagement through successful outcomes.</li>\n</ul>\n<ul>\n<li>Field enablement – Build and execute enablement plans that empower AEs and SAs to confidently carry the Lakebase conversation even when the specialist is not present.</li>\n</ul>\n<ul>\n<li>Market voice and thought leadership – Develop an internal and external presence by contributing to global AMAs and internal forums, and by representing Databricks at key first- and 
third-party events.</li>\n</ul>\n<p><strong>What we look for</strong></p>\n<ul>\n<li>7+ years of enterprise SaaS sales experience, consistently exceeding quota in complex, multi-stakeholder deals.</li>\n</ul>\n<ul>\n<li>Proven success selling data platforms, operational databases (e.g., Postgres, MySQL, cloud-native DBaaS), or adjacent data/AI infrastructure to technical buyers and business leaders.</li>\n</ul>\n<ul>\n<li>Strong understanding of modern data and application architectures, including cloud-native services, microservices, event-driven systems, and how operational data underpins AI and analytics strategies.</li>\n</ul>\n<ul>\n<li>Ability to sell to both technical stakeholders (developers, architects, data engineers) and business stakeholders (product leaders, operations, line-of-business owners).</li>\n</ul>\n<ul>\n<li>Demonstrated experience leading specialist or overlay motions, working jointly with core Account Executives to create and progress opportunities.</li>\n</ul>\n<ul>\n<li>Executive presence with the ability to whiteboard architectures, lead C-level conversations, and build trust with senior decision makers.</li>\n</ul>\n<ul>\n<li>Strong value selling skills: adept at discovering pain, building a business case, and tying technical capabilities to clear, quantified outcomes.</li>\n</ul>\n<ul>\n<li>Excellent communication, storytelling, and negotiation skills, with comfort presenting to both large and small audiences.</li>\n</ul>\n<ul>\n<li>Bachelor’s degree or equivalent practical experience.</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>Experience selling Postgres, operational databases, OLTP workloads, or transactional cloud database services, ideally within large or strategic accounts.</li>\n</ul>\n<ul>\n<li>Familiarity with data platforms, lakehouse architectures, and cloud ecosystems (AWS, Azure, GCP), including how operational databases fit within broader data and AI strategies.</li>\n</ul>\n<ul>\n<li>Understanding of 
reverse ETL, real-time decisioning, and operational analytics use cases, and how they drive value for customer-facing and internal applications.</li>\n<li>Exposure to AI-native and agent-driven applications that depend on low-latency, highly scalable operational data services.</li>\n<li>Prior experience in a high-growth, category-creating environment, helping shape new plays, messaging, and customer narratives.</li>\n<li>Experience collaborating with partners and ISVs to drive joint pipeline and co-sell motions.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>\n<p><strong>Our Commitment to Diversity and Inclusion</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d6421dea-6e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8477547002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data platforms","operational databases","Postgres","MySQL","cloud-native DBaaS","data/AI infrastructure","technical buyers","business leaders","modern data and application architectures","cloud-native services","microservices","event-driven systems","AI and analytics strategies","technical stakeholders","business stakeholders","value selling skills","discovering pain","building a business case","quantified outcomes","communication","storytelling","negotiation skills"],"x-skills-preferred":["OLTP workloads","transactional cloud database services","lakehouse 
architectures","cloud ecosystems","reverse ETL","real-time decisioning","operational analytics use cases","AI-native applications","agent-driven applications","high-growth environments","category-creating environments","partner collaborations","ISV collaborations"],"datePosted":"2026-04-18T15:52:47.849Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India; Mumbai, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data platforms, operational databases, Postgres, MySQL, cloud-native DBaaS, data/AI infrastructure, technical buyers, business leaders, modern data and application architectures, cloud-native services, microservices, event-driven systems, AI and analytics strategies, technical stakeholders, business stakeholders, value selling skills, discovering pain, building a business case, quantified outcomes, communication, storytelling, negotiation skills, OLTP workloads, transactional cloud database services, lakehouse architectures, cloud ecosystems, reverse ETL, real-time decisioning, operational analytics use cases, AI-native applications, agent-driven applications, high-growth environments, category-creating environments, partner collaborations, ISV collaborations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccb9d120-ebb"},"title":"Staff Software Engineer - Ingestion","description":"<p>We are looking for a Staff Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and implementing the ingestion capabilities of the Lakehouse. 
You will work closely with other products to embed Connect into various surfaces in Databricks.</p>\n<p>The successful candidate will have experience in core database internals and be able to extract data from OLTP systems while imposing minimal load on production systems. They will also be able to build systems that use techniques such as incremental data capture and log parsing.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Design and implement the ingestion capabilities of the Lakehouse</li>\n<li>Work closely with other products to embed Connect into various surfaces in Databricks</li>\n<li>Extract data from OLTP systems while imposing minimal load on production systems</li>\n<li>Build systems that use techniques such as incremental data capture and log parsing</li>\n<li>Collaborate with cross-functional teams to ensure seamless integration of Connect with other Databricks products</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>15+ years of industry experience building and supporting large-scale distributed systems</li>\n<li>Experience in areas like database replication, backup, and transaction recovery</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases</li>\n<li>Experience driving company initiatives towards customer satisfaction</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive benefits and perks that meet the needs of all employees</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Recognition and rewards for outstanding performance</li>\n</ul>\n<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccb9d120-ebb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8201686002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["database internals","OLTP systems","incremental data capture","log parsing","large-scale distributed systems","database replication","backup","transaction recovery"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:20.662Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database internals, OLTP systems, incremental data capture, log parsing, large-scale distributed systems, database replication, backup, transaction recovery"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b054d891-685"},"title":"Staff+ Software Engineer, Databases","description":"<p>We&#39;re looking for experienced engineers to build and scale the database infrastructure that powers both Claude&#39;s product offerings and Anthropic&#39;s research initiatives.</p>\n<p>As a Software Engineer on the Databases team, you will architect and operate database systems that both enable millions of users to interact with Claude and support cutting-edge AI research.</p>\n<p>This is a unique opportunity to tackle database challenges at unprecedented scale. 
You&#39;ll develop the database strategy for Anthropic, design systems that handle billions of API requests, create storage solutions that work seamlessly across GCP, AWS, and diverse deployment models, and build the reliable data layer that accelerates research experimentation.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Drive the technical direction for database solutions used across Product and Research</li>\n<li>Design and implement database solutions that scale to support millions of users across Claude&#39;s product ecosystem</li>\n<li>Build and scale database systems through 100x+ growth while maintaining reliability and performance</li>\n<li>Architect data storage solutions that work seamlessly across GCP, AWS, first-party deployments, third-party deployments, and other environments</li>\n<li>Develop database infrastructure that serves both product and research workloads with different performance characteristics</li>\n<li>Partner with product and research teams to understand data requirements and build infrastructure that accelerates innovation</li>\n<li>Optimize database performance, reliability, and cost efficiency at massive scale</li>\n<li>Make critical build vs. 
buy decisions for database technologies</li>\n</ul>\n<p>You might be a good fit if you:</p>\n<ul>\n<li>Have 10+ years of experience in a Software Engineer role, building and scaling database systems</li>\n<li>Have 3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n<li>Possess deep expertise in distributed database architectures and OLTP systems at scale</li>\n<li>Have successfully scaled databases through massive growth at high-growth companies</li>\n<li>Can balance the speed of a startup environment with the reliability needs of production systems</li>\n<li>Excel at technical leadership and cross-functional collaboration</li>\n<li>Are passionate about building the data layer that enables next-generation AI capabilities</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Deep expertise scaling PostgreSQL, MySQL, DynamoDB, or similar database systems</li>\n<li>Experience with Redis, Temporal, vector databases, or async job processing frameworks</li>\n<li>Experience building multi-cloud or hybrid cloud database solutions</li>\n<li>Knowledge of database orchestration and automation at scale</li>\n<li>Background at companies known for database excellence</li>\n</ul>\n<p>Note: Prior AI/ML infrastructure experience is not required. 
We value deep infrastructure/databases expertise from any domain.</p>\n<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b054d891-685","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5151069008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$320,000-$485,000 USD","x-skills-required":["Database architecture","Distributed database systems","OLTP systems","Database scaling","Database performance optimization","Database reliability","Database cost efficiency"],"x-skills-preferred":["PostgreSQL","MySQL","DynamoDB","Redis","Temporal","Vector databases","Async job processing frameworks","Multi-cloud database solutions","Hybrid cloud database solutions","Database orchestration","Database automation"],"datePosted":"2026-04-18T15:50:18.715Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Database architecture, Distributed database systems, OLTP systems, Database scaling, Database performance optimization, Database reliability, Database cost efficiency, PostgreSQL, MySQL, DynamoDB, Redis, Temporal, Vector databases, Async job processing frameworks, Multi-cloud database solutions, Hybrid cloud database solutions, Database orchestration, Database 
automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe0d53c0-05e"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play a critical role during this journey. The DSA works across a small number of our largest or highest potential key accounts, collaborating across Databricks teams to accelerate the adoption and growth of the Databricks platform.</p>\n<p>As a DSA, you will help ensure customer success by driving focus and technical accountability to our most complex customers who need guidance to accelerate consumption on Databricks workloads that they have already selected. This is a hybrid technical and commercial role. It is commercial in the sense that you will be required to own and drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, owning executive relationships and creating and driving plans and strategies for Databricks colleagues to execute upon.</p>\n<p>This is in parallel to being technical, with expectations being that you become at least Level 200 across all Databricks products/workloads and that you become the Use Case-specific technical lead post Technical Win. 
You will bring strong executive relationship management skills and high levels of technical credibility to effectively engage and communicate at all levels within an organization, in particular with a track record of building strong relationships with the customers&#39; executives and C-suite, elevating the conversation, and helping them realize the value of Databricks.</p>\n<p>You will report directly to a Director, Field Engineering, as part of your Business Unit&#39;s Technical GM organization. You will play a key role in establishing the fundamental assets and best practices within the DSA team, mentoring other DSAs and wider account team members within your region, helping them develop personally, professionally and to further their careers.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers.</li>\n<li>Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts.</li>\n<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>\n<li>Be the first point of contact for any technical issues or questions related to production/go live status of agreed upon Use Cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations.</li>\n<li>Leverage both Shared Services of User Education, Onboarding/Technical Services and Support resources, along with escalating to Level 400/500 technical experts (Specialist Solution Architects and Product Specialists) to execute on the right tasks that are beyond your scope of activities or 
expertise.</li>\n<li>Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals.</li>\n<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs (DBR, E2 and Unity Catalog).</li>\n<li>Build and maintain an executive level as well as a detailed programme level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist and SA across the workstreams below:</li>\n</ul>\n<ul>\n<li>Key use cases moving from &#39;win&#39; to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of LH vision)</li>\n<li>Organic needs for current investment, e.g. Cloud Cost control, Tuning &amp; Optimization</li>\n<li>Executive and operational governance</li>\n<li>Proactively provide internal and external updates</li>\n<li>KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression to your Technical GM</li>\n<li>Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe0d53c0-05e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8482406002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Engineering technologies (e.g. Spark, Hadoop, Kafka)","Data Warehousing (e.g. SQL, OLTP/OLAP/DSS)","Data Science and Machine Learning technologies (e.g. 
pandas, scikit-learn, HPO)","Executive disciplinary management","Influencing and leading teams","Strategic Management Consulting","Building and steering to a value case","Quota ownership, achievement and track record of great performance against objective target","Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:45.267Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seoul, South Korea"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering technologies (e.g. Spark, Hadoop, Kafka), Data Warehousing (e.g. SQL, OLTP/OLAP/DSS), Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO), Executive disciplinary management, Influencing and leading teams, Strategic Management Consulting, Building and steering to a value case, Quota ownership, achievement and track record of great performance against objective target, Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e503559e-cf7"},"title":"Senior Machine Learning Engineer","description":"<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>\n<p><strong>Job Description:</strong></p>\n<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio or images. You could solve it, but you had to throw a ton of compute at it. That all changed with the invention of the Fast Fourier transform, which could efficiently break that signal down into the frequencies that are a part of it.</p>\n<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. 
We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>\n<p><strong>About Us:</strong></p>\n<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>\n<p><strong>Job Responsibilities:</strong></p>\n<p>As part of this role, you will:</p>\n<ul>\n<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>\n<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human in the loop, escalations, and feeding human decisions back into the service</li>\n<li>Implement testing, observability, alerting, and disaster recovery for all services</li>\n<li>Implement tracing, performance, and regression testing</li>\n<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>\n</ul>\n<p><strong>Ideal Candidate:</strong></p>\n<p>The ideal candidate for the role has:</p>\n<ul>\n<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or DevOps</li>\n<li>Expertise with:</li>\n</ul>\n<ul>\n<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>\n<li>SQL, dbt, Python</li>\n<li>OLAP / OLTP data modelling and architecture</li>\n<li>Key-value stores: Redis, DynamoDB, or equivalent</li>\n<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>\n<li>API frameworks: FastAPI, Flask, etc.</li>\n<li>Production ML Service experience</li>\n<li>Working across a full-stack development 
environment, with experience transferable to Haskell, React, and TypeScript</li>\n</ul>\n<p><strong>Total Rewards Package:</strong></p>\n<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p><strong>Salary Range:</strong></p>\n<p>Our target new hire base salary ranges for this role are the following:</p>\n<ul>\n<li>US employees (any location): $200,700 - $250,900</li>\n<li>Canadian employees (any location): CAD 189,700 - 237,100</li>\n</ul>\n<p><strong>Diversity &amp; Belonging:</strong></p>\n<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e503559e-cf7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5639559004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)","x-skills-required":["Snowflake","dbt","Fivetran","Airbyte","Dagster","Airflow","SQL","Python","OLAP / OLTP data modelling and 
architecture","Redis","dynamoDB","Kinesis","Kafka","Redpanda","FastAPI","Flask","Production ML Service experience","Haskell","React","TypeScript"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:45:16.566Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, dynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, TypeScript","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189700,"maxValue":250900,"unitText":"YEAR"}}}]}