<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>9bb1344c-662</externalid>
      <Title>Sr. Solutions Engineer, Retail - CPG</Title>
<Description><![CDATA[<p>We are looking for a Senior Solutions Engineer to join our team. As a Senior Solutions Engineer, you will work with large enterprises in the Retail and CPG space to help them become more data-driven. You will define and direct the technical strategy for our largest and most important accounts, driving broader use of our products and deeper adoption of ML &amp; AI.</p>
<p>You will work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives. You will also work with a team of engineers to build proofs of concept and demonstrate our products.</p>
<p>The ideal candidate will have a strong background in value selling, technical account management, and technical leadership. They will also have a solid understanding of big data, data science, and cloud technologies.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and direct the technical strategy for our largest and most important accounts</li>
<li>Work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives</li>
<li>Collaborate with a team of engineers to build proofs of concept and demonstrate our products</li>
<li>Provide technical guidance and support to customers</li>
<li>Work with customers to identify and address technical issues</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience working with large enterprises in the Retail and CPG space</li>
<li>3+ years of experience in a pre-sales capacity or supporting sales activity</li>
<li>Strong background in value selling, technical account management, and technical leadership</li>
<li>Solid understanding of big data, data science, and cloud technologies</li>
<li>Experience with design and implementation of big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP</li>
<li>Production programming experience in Python, R, Scala, or Java</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>big data, data science, cloud technologies, Hadoop, NoSQL, MPP, OLTP, OLAP, Python, R, Scala, Java, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It has over 10,000 organisations worldwide as customers.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7507778002</Applyto>
      <Location>Illinois</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>02ba8342-079</externalid>
      <Title>Specialist Solutions Architect - Data Warehousing (Healthcare &amp; Life Sciences)</Title>
<Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. This is a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>
<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>
<p>As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in the data warehousing specialty, including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>
<li>Prove the value of the Databricks Data Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>
<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>
<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>
<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>
<li>Contribute to the Databricks Community</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>
<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>
<li>Deep Specialty Expertise in at least one of the following areas:</li>
</ul>
<ul>
<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>
<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>
<li>Experience migrating on-premise EDW workloads to the public cloud</li>
</ul>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Production programming experience in SQL and Python, Scala, or Java</li>
<li>Experience with the AWS, Azure, or GCP clouds</li>
<li>2 years of professional experience with data warehousing and big data technologies (e.g., SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>
<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>
<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>
<li>Can travel up to 30% when needed</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8337429002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a84988d7-61a</externalid>
      <Title>Partner Solutions Architect, CEMEA</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect, you will engage with our top Consulting and System Integrator (C&amp;SI) Partners and Field Engineering to drive adoption of the Data Intelligence Platform in our top customers through C-Suite Technical Executive alignment, engagement of Champions, and collaboration with our sales teams in the region.</p>
<p>You will develop ongoing partner capability via the &#39;Technical Champions program&#39; within our top C&amp;SI Partners to support CoE creation and delivery excellence.</p>
<p>You will provide strategic vision related to the Databricks Data Intelligence Platform aligning to the GSI engagement in our top accounts and develop and support programs to promote Partner expertise in the application of the Databricks Data Intelligence Platform.</p>
<p>You will report to the Director, Field Engineering (Partner Solutions Architect).</p>
<p>The impact you will have:</p>
<p>A Partner Solutions Architect plays a crucial role in the success of the Databricks partner ecosystem by ensuring partners have the technical knowledge and experience to build, maintain and grow successful solutions for their customers.</p>
<p>This, in turn, drives the adoption and success of the Databricks Data Intelligence Platform and the Partner solutions in the market.</p>
<p>To achieve this, the PSA will:</p>
<ul>
<li>Accelerate Partner pre-sales and delivery in joint, strategic customer accounts by aligning Partner and Databricks resources and providing technical expertise to accelerate adoption and consumption of the platform</li>
<li>Work closely with Databricks account teams to help our partner ecosystem scope, evaluate and deliver large-scale data projects and transformational programmes</li>
<li>Grow the Partner Databricks delivery capability by providing technical expertise to help design, build and maintain repeatable solutions using Databricks Products and Services</li>
<li>Develop, maintain and grow Senior Technical Executive relationships to identify new business opportunities, innovative use cases, and competition, and to support joint go-to-market initiatives</li>
</ul>
<p>The role requires up to 40% travel to GSI Partner sites and the Databricks German offices.</p>
<p>What we look for:</p>
<ul>
<li>Extensive hands-on experience as a Data Professional in a modern cloud-based data stack</li>
<li>At least 3 years of experience with technical pre-sales and sales methodologies within a consumption business model</li>
<li>Ability to collaborate closely with partner organisations at the senior executive level to understand their needs and objectives and align them to Databricks products and services</li>
<li>Experience conducting training sessions, workshops, and webinars to educate partners on new technologies, features, and best practices</li>
<li>Experience developing and maintaining technical content such as whitepapers, case studies, and solution guides to assist partners in leveraging Databricks offerings</li>
<li>Prior experience coding in a core programming language (e.g., Python, Java, Scala)</li>
<li>Experience designing, implementing and maintaining end-to-end data architectures for Big Data, Data Warehousing and AI on MPP-based platforms</li>
<li>Experience managing multiple, frequently changing priorities across multiple teams</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Intelligence Platform, Cloud-based data stack, Technical pre-sales and sales methodologies, Partner organisations, Senior executive level, New technologies, Features, Best practices, Core programming language, Python, Java, Scala, Data Architectures, Big Data, Data Warehousing, AI, MPP based platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8413308002</Applyto>
      <Location>Switzerland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7b750523-8ff</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to lead the technical strategy and implementation of our enterprise data architecture, governance foundations, and analytics enablement tooling.</p>
<p>In this role, you will be the primary engineering counterpart to the Senior Product Manager for Data Enablement &amp; Governance, jointly shaping the roadmap for enterprise analytics, shared definitions, and the tools that help Omada answer questions faster and more reliably.</p>
<p>You will design and evolve core data products, define patterns and standards used across the company, and drive the technical execution of initiatives that ensure our metrics, reports, and data products are scalable, governed, and trustworthy.</p>
<p>This is a high-impact, cross-functional Staff role working across Data Engineering, Data Science, Analytics, Product, IT, and business leaders.</p>
<p><strong>Key Responsibilities:</strong></p>
<p><strong>Enterprise Data Architecture</strong></p>
<ul>
<li>Own the vision and technical roadmap for Omada&#39;s enterprise data architecture, spanning ingestion, storage, modeling, and serving layers for analytics and applied statistics use cases.</li>
<li>Design, implement, and evolve scalable, secure, and cost-efficient data solutions (data lakes, warehouses, marts, semantic layers) that support governed, cross-functional analytics and self-service.</li>
<li>Define and socialize architectural patterns, data contracts, and integration standards used by data and product teams across the organization.</li>
<li>Anticipate future needs (e.g., new product lines, new modalities, AI/ML workloads) and drive proactive architectural changes rather than reacting to incidents or point-in-time requests.</li>
</ul>
<p><strong>Data Modeling, Quality, and Governance Foundations</strong></p>
<ul>
<li>Lead the design of logical and physical data models to support enterprise metrics, dashboards, and ad hoc analytics, with a focus on reusability and clear ownership.</li>
<li>Implement robust data quality, validation, and monitoring frameworks that underpin trusted “single source of truth” definitions for core concepts (e.g., active member, MAU, GLP-1 member).</li>
<li>Partner with the Senior Product Manager, Data Enablement &amp; Governance to translate governance decisions (definitions, ownership, change-management processes) into concrete technical implementations in the data platform.</li>
<li>Set standards and review mechanisms to ensure new pipelines, marts, and reports align with enterprise definitions and governance policies.</li>
<li>Continuously improve performance, scalability, and cost-efficiency of data workflows and storage; lead deep dives and remediation for complex production issues.</li>
</ul>
<p><strong>Enterprise Data Products Lifecycle</strong></p>
<ul>
<li>In close partnership with the Senior PM, define and deliver core, reusable data products (e.g., engagement, clinical, financial, client, care delivery datasets) that power dashboards, reporting, and self-service analytics.</li>
<li>Co-architect and implement technical foundations for AI-assisted analytics tools, governed semantic layers, and reporting applications that make analysts and business users more efficient.</li>
<li>Partner with Product and Engineering teams owning tools like Amplitude, Tableau, and internal reporting tools to ensure consistent instrumentation, mapping to enterprise definitions, and scalable access patterns.</li>
<li>Translate business and product requirements into resilient schemas, data services, and interfaces that are usable, maintainable, and auditable.</li>
<li>Ensure production data delivery meets defined SLAs and supports downstream BI, reporting apps, and applied statistics workloads.</li>
<li>Play a key role in cross-functional forums (e.g., Data Governance Committee, analytics communities) as the technical voice for feasibility, risk, and long-term platform health.</li>
</ul>
<p><strong>Technical Leadership, Mentorship, and Culture</strong></p>
<ul>
<li>Lead large, multi-team technical initiatives, from design to implementation and rollout, setting a high bar for design docs, reviews, and execution quality.</li>
<li>Mentor senior and mid-level engineers, elevating the team’s skills in data modeling, pipeline design, governance, and platform thinking.</li>
<li>Help shape playbooks for how product squads and spokes engage with central data teams on new metrics, data products, and applied stats projects.</li>
<li>Partner closely with Analytics, Data Science, Product, and business leaders to ensure data architecture and governance decisions are aligned with company OKRs and measurable business value.</li>
<li>Proactively identify complexity, duplication, and fragility in existing systems; drive simplification and standardization with sustainable solutions.</li>
<li>Model Omada’s values in day-to-day work, fostering a culture of trust, context-seeking, bold thinking, and high-impact delivery.</li>
</ul>
<p><strong>About You:</strong></p>
<ul>
<li>8+ years of experience building, maintaining, and orchestrating scalable data platforms and high-quality production pipelines, including significant experience in analytics or warehousing environments.</li>
<li>Demonstrated Staff-level impact: leading cross-team technical initiatives, making architectural decisions that shaped a multi-year roadmap, and influencing stakeholders beyond your immediate team.</li>
<li>Deep experience with cloud data ecosystems (e.g., AWS) and modern data warehouses (e.g., Redshift, Snowflake, BigQuery), including MPP query optimization.</li>
<li>Strong background in data modeling for OLTP and OLAP, and designing reusable data products for BI, reporting, and advanced analytics.</li>
<li>Hands-on experience implementing data quality, observability, and governance frameworks, ideally in a regulated or PHI/PII-sensitive environment.</li>
<li>Experience partnering with Product Management and Analytics to define and deliver platform capabilities, not just point solutions.</li>
</ul>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Strong proficiency in SQL (analytical and performance-tuned) and experience with relational and MPP databases.</li>
<li>Proficiency in at least one modern programming language used in data engineering (e.g., Python, Java, Scala) and comfort applying software engineering best practices (testing, CI/CD, code review).</li>
<li>Experience with workflow orchestration and data integration tools (e.g., Airflow) and event-driven or streaming patterns where appropriate.</li>
<li>Familiarity with BI and analytics tools (e.g., Tableau, Amplitude, or similar) and how they integrate with governed data layers.</li>
<li>Experience with data governance concepts (ownership, lineage, definitions, access controls) and their technical implementation in a modern data stack.</li>
<li>Familiarity with AI tools for development.</li>
</ul>
<p><strong>Communication &amp; Working Style:</strong></p>
<ul>
<li>Excellent communication and collaboration skills, with the ability to convey complex technical concepts to non-technical stakeholders.</li>
<li>Highly self-directed and comfortable operating in ambiguous, cross-functional problem spaces, creating clarity and direction where none exists.</li>
<li>Strong sense of ownership and bias for impact; you care about outcomes for members, customers, and internal users, not just elegant systems.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote-first, work-from-home culture</li>
<li>Flexible Time Off to help you recharge</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Cloud data ecosystems, Modern data warehouses, MPP query optimization, Data modeling, Data quality, Data governance, Workflow orchestration, Data integration, Event-driven or streaming patterns, BI and analytics tools, AI tools for development</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a healthcare technology company that provides digital therapeutics for chronic disease management.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7753330</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5d911052-764</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring a Senior Data Engineer to work on our Data Lake Team. As a key member of the team, you will be responsible for building and operating various data platform components, including data quality, data pipelines, infrastructure, and monitoring.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Maintain the data pipeline job framework</li>
<li>Develop the Data Quality framework (an internal set of tools for validating internal and external data sources)</li>
<li>Maintain and develop a public-facing data ingestion service handling 17,000+ RPS</li>
<li>Maintain and develop core data pipelines in both batch and streaming modes</li>
<li>Be the last line of support for our internal platform users</li>
<li>Take part in the on-call rotation for data platform incidents (shared across the team)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Fluent English</li>
<li>4+ years of experience building production services and data pipelines (batch and/or streaming)</li>
<li>Strong experience with Python, or the readiness to ramp up quickly</li>
<li>Hands-on experience with at least one MPP system (Spark, Trino, Redshift, etc.)</li>
<li>Hands-on experience operating services in a cloud environment (AWS preferred)</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Terraform/CloudFormation or other IaC tools</li>
<li>ClickHouse or similar analytical databases</li>
<li>Experience with data quality/observability tools</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all employees to take at least 3 weeks per year</li>
<li>Fully remote team - choose where you live</li>
<li>Work from home stipend - we want you to have the resources you need to set up your home office</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget - refreshed each year for every employee</li>
<li>Maternity &amp; Paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>
<li>Stock options - offered in addition to the base salary</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>Python, MPP system, AWS, Terraform, ClickHouse, data quality/observability tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
<Employerdescription>Constructor is a U.S.-based company that has been in the market since 2019, building a search and discovery platform for ecommerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FF201D8AA3</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
  </jobs>
</source>