<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>bd829e13-6ce</externalid>
      <Title>Member of Technical Staff - Data Infrastructure Manager</Title>
<Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems with true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to consumers, businesses, and developers alike, so that everyone can realize its benefits.</p>
<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>
<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>
<ul>
<li>Deep technical expertise in big data and distributed systems</li>
<li>A track record of leading and developing engineering talent</li>
<li>A passion for automation, observability, and operational excellence</li>
<li>The ability to translate complex technical strategy into clear, executable plans</li>
<li>Empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Team Leadership &amp; People Development</p>
<ul>
<li>Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth.</li>
<li>Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development.</li>
<li>Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Technical Strategy &amp; Architecture</p>
<ul>
<li>Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems.</li>
<li>Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase.</li>
<li>Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</li>
</ul>
<p>Platform &amp; Operations</p>
<ul>
<li>Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response.</li>
<li>Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams.</li>
<li>Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM.</li>
<li>Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</li>
</ul>
<p>Cross-Functional Collaboration</p>
<ul>
<li>Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos).</li>
<li>Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</li>
</ul>
<p>Qualifications</p>
<p>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years of experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>
<p>Preferred Qualifications</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience OR Bachelor’s Degree AND 14+ years, OR equivalent experience.</li>
<li>5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>5+ years of hands-on experience with distributed systems, from bare-metal to cloud-native environments.</li>
<li>5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation fluency in Python, Bash, or PowerShell.</li>
<li>Proven track record managing CI/CD pipelines, release automation, and production incident response.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
</ul>
<p>#MicrosoftAI #MAIDPS #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,000 per year</Salaryrange>
<Skills>Big Data and Distributed Systems, Data Infrastructure, DevOps, SRE, Platform Engineering, Containerized Application Deployments, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production Incident Response, Modern Data Platforms, Databricks, Relational and NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks (e.g., Semantic Kernel, AutoGen), Retrieval Pipelines, Multi-Agent Systems, Web Stacks, TypeScript, Node.js, React, PHP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-3/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dacc9b06-4d8</externalid>
      <Title>Member of Technical Staff - Principal Data Infrastructure Engineer</Title>
<Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems with true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to consumers, businesses, and developers alike, so that everyone can realize its benefits.</p>
<p>We’re looking for a Member of Technical Staff – Principal Data Infrastructure Engineer. This role is a dynamic blend of Platform Engineering, DevOps/SRE, and Big Data Infrastructure Engineering, focused on enabling large-scale data and ML pipelines and intelligent systems. If you’ve architected big data platforms from the ground up and are eager to apply that expertise to consumer AI, we want to hear from you.</p>
<p>You’ll bring:</p>
<ul>
<li>Deep technical expertise</li>
<li>A passion for automation and observability</li>
<li>Fluency in distributed systems</li>
<li>Creativity to design scalable solutions</li>
<li>And just as importantly: empathy, collaboration, and a growth mindset</li>
</ul>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and maintain scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications.</li>
<li>Champion DevOps and SRE best practices: automated deployments, service monitoring, and incident response.</li>
<li>Build a self-service big data platform that empowers data and platform engineers and researchers.</li>
<li>Develop robust CI/CD pipelines and automate infrastructure provisioning using Infrastructure as Code tools (Bicep, Terraform, ARM).</li>
<li>Collaborate with Data Engineers, Data Scientists, AI Researchers, and Developers to deliver secure, seamless big data workflows.</li>
<li>Lead technical design reviews and uphold a clean, secure, and well-documented codebase.</li>
<li>Proactively identify and resolve bottlenecks in data pipelines and infrastructure.</li>
<li>Optimize system performance across storage, compute, and analytics layers.</li>
<li>Partner with Security teams to enhance system security (IAM, OAuth, Kerberos).</li>
<li>Embody and promote Microsoft’s values: Respect, Integrity, Accountability, and Inclusion.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications: Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years of experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>4+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>3+ years of hands-on experience managing and scaling distributed systems, from bare-metal to cloud-native environments.</li>
<li>2+ years deploying containerized applications using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation skills using Python, Bash, or PowerShell.</li>
<li>Proven success in CI/CD pipeline management, release automation, and production troubleshooting.</li>
<li>Experience working with Databricks for scalable data processing and analytics.</li>
<li>Familiarity with security practices in infrastructure environments, including IAM, OAuth, and Kerberos administration.</li>
<li>Proven experience with cloud-native infrastructure across Azure, AWS, or GCP.</li>
<li>Hands-on expertise with modern data platforms like Databricks, including a deep understanding of data storage and processing technologies: relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ).</li>
<li>Capacity planning and incident management for large-scale big data systems.</li>
<li>Solid collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams.</li>
<li>Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</li>
<li>Exposure to agentic workflows, deep learning, or AI frameworks.</li>
<li>Practical experience integrating LLMs (e.g., GPT-based models) into daily workflows: automating documentation, code generation, reviews, and operational intelligence.</li>
<li>Solid grasp of prompt engineering techniques to design, optimize, and evaluate interactions with LLMs.</li>
<li>Demonstrated ability to troubleshoot and resolve complex performance and scalability issues across infrastructure layers.</li>
<li>Excellent interpersonal and communication skills, with a passion for mentorship and continuous learning.</li>
<li>Experience applying LLMs to DevOps workflows, enhancing incident response, and streamlining cross-functional collaboration is a strong advantage.</li>
</ul>
<p>#MicrosoftAI #mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>Big Data Infrastructure, DevOps, SRE, Platform Engineering, Distributed Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Databricks, CI/CD Pipelines, Infrastructure as Code, Bicep, Terraform, ARM, Python, Bash, PowerShell, Kubernetes, Helm, Kustomize, LLMs, GPT-based models, Prompt Engineering, Agentic Workflows, Deep Learning, AI Frameworks, Containerized Applications, Security Practices, IAM, OAuth, Kerberos Administration, Web Stacks, TypeScript, Node.js, React, PHP, Modern Data Platforms, Spark Compute Engines, Distributed File Systems, Messaging Systems, Capacity Planning, Incident Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
<Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-principal-data-infrastructure-engineer-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, not as ends in themselves, but as execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses: for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
<Skills>Python, SQL, Modern data/AI tooling, Statistics, Experimentation, Causal reasoning, Measurement, Data science, Machine learning, Quantitative engineering, Evaluation frameworks for LLMs, Retrieval systems, Agentic workflows, Spark, Ray, Airflow, AWS, GCP, Snowflake, BigQuery, APIs and internal tools, Causal inference, Forecasting, Optimization, Statistical modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58a44dab-91a</externalid>
      <Title>Partner Solutions Architect - Japan</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>
<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>
<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>
<p>Internal operating docs show this motion consistently includes enablement sessions, QBR sponsorships, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>
<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>
</ul>
<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and engaged pipeline.</p>
<p>What you&#39;ll need:</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Team Interviews</li>
<li>Demo Round</li>
</ul>
<p>#LI-LA1</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005</Applyto>
      <Location>Japan - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3168d7d3-70b</externalid>
      <Title>Partner Solutions Architect - North America</Title>
<Description><![CDATA[
<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>
<p>Responsibilities</p>
<ul>
<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation (and yes we use it!)</li>
<li>Pension coverage</li>
<li>Excellent healthcare</li>
<li>Paid Parental Leave</li>
<li>Wellness stipend</li>
<li>Home office stipend, and more!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a software company that provides an analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, they have surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005</Applyto>
      <Location>Canada - Remote; US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7ffabac7-275</externalid>
      <Title>Director, Solutions &amp; Forward Deployed Engineering</Title>
<Description><![CDATA[<p>We are seeking a Director, Solutions &amp; Forward Deployed Engineering to lead the technical delivery of the Zus platform and help customers successfully connect their systems, data, and applications to Zus.</p>
<p>Reporting to the Head of Customer Success &amp; Delivery, this leader will own how customers integrate with the Zus platform and will be responsible for ensuring healthcare organizations and digital health builders can reliably ingest data, connect EHR systems, and deploy applications powered by Zus APIs.</p>
<p>They will oversee teams responsible for forward deployed engineering and technical enablement, working closely with customer engineering teams to integrate Zus into production environments and connect to data networks.</p>
<p>You will guide customers through the complexity of healthcare interoperability, helping them translate real-world workflows into scalable integrations built on Zus.</p>
<p>This is a hands-on player-coach role. You will lead the team while also personally engaging in complex implementations, architecture discussions, and customer deployments.</p>
<p>You will champion the use of AI tools, automation frameworks, and reusable integration patterns to dramatically improve how quickly and reliably customers connect to the Zus platform.</p>
<p>The ideal candidate combines deep experience in healthcare interoperability, enterprise software implementations, API platforms, and AI-enabled engineering workflows with the leadership skills required to scale a delivery organization.</p>
<p>Key responsibilities:</p>
<ul>
<li>Lead implementation and technical delivery - Own the technical delivery lifecycle from contract signature through production deployment and early adoption</li>
<li>Lead and grow a team of Solutions Engineers and Forward Deployed Engineers - Develop career paths, performance expectations, and development plans for the team to ensure excellent execution of goals</li>
<li>Ensure consistent, high-quality execution across multiple concurrent enterprise implementations</li>
<li>Establish best practices for onboarding, implementation, integration, and go-live readiness</li>
<li>Set customers up for success across multiple high-priority use cases</li>
<li>Ensure customers achieve rapid time-to-value from the Zus platform</li>
</ul>
<p>Act as player-coach for complex implementations</p>
<ul>
<li>Personally engage on strategic or technically complex customer deployments</li>
<li>Guide integrations involving FHIR, HL7, CCD, APIs, SFTP pipelines, and EHR platforms</li>
<li>Troubleshoot complex interoperability and data pipeline issues</li>
<li>Work directly with engineering teams to deploy and operationalize Zus products</li>
<li>Serve as a trusted technical advisor to customer technical and operational stakeholders</li>
</ul>
<p>Drive forward deployed engineering</p>
<ul>
<li>Support customers in building production-grade applications and workflows on top of Zus APIs</li>
<li>Help customers operationalize clinical and operational data across care delivery workflows</li>
<li>Lead the development of reference architectures and deployment patterns</li>
<li>Identify integration opportunities that accelerate product adoption and expansion</li>
</ul>
<p>Deliver training and technical enablement</p>
<ul>
<li>Oversee technical onboarding and training programs for new customers</li>
<li>Enable customer engineering and product teams to effectively build on the Zus platform</li>
<li>Develop documentation, workshops, and enablement resources for technical users</li>
</ul>
<p>Drive AI-enabled implementation and automation</p>
<ul>
<li>Lead the adoption of AI tools and automation frameworks across the delivery organization</li>
<li>Identify opportunities to automate manual implementation work using LLMs, scripting, and developer tooling</li>
<li>Develop reusable automation patterns for all parts of the Zus ecosystem</li>
<li>Help customers leverage Zus data to power AI-enabled workflows and analytics applications</li>
</ul>
<p>Partner with Product and Engineering</p>
<ul>
<li>Translate customer implementation patterns into platform improvements</li>
<li>Participate in technical discussions to find reusable integration patterns that can be embedded directly into the Zus platform</li>
<li>Communicate customer needs to the Product &amp; Engineering teams</li>
</ul>
<p>You&#39;re a good fit because you have:</p>
<ul>
<li>10+ years of experience in technical implementation, solutions engineering, systems integration, or professional services leadership, preferably in healthtech, SaaS, or enterprise software</li>
<li>Proven experience leading customer-facing teams and scaling implementation or professional services functions</li>
<li>Deep expertise in healthcare data interoperability, including FHIR, HL7, CCD, and EHR integrations</li>
<li>Strong understanding of APIs, data ingestion pipelines (ETL, JSON, CSV), and modern data platforms (e.g., Snowflake)</li>
<li>Experience designing scalable implementation frameworks and reusable integration patterns</li>
<li>Familiarity with secure environments and compliance frameworks (HIPAA, SOC 2)</li>
<li>Executive presence and the ability to build trust with both technical and non-technical stakeholders</li>
<li>Strong strategic thinking paired with a willingness to dive into complex technical or delivery challenges when needed</li>
<li>A self-starter mindset and comfort operating in a fast-paced, evolving startup environment</li>
<li>Passion for improving healthcare through better access to and use of data</li>
<li>Willingness to travel up to ~25% for customer engagements, industry events, and company meetings</li>
<li>Bachelor’s degree in Business, Engineering, or a related field (advanced degree a plus)</li>
</ul>
<p>Additional Information:</p>
<p>We will offer you...</p>
<ul>
<li>Competitive compensation that reflects the value you bring to the team: a combination of cash and equity</li>
<li>Robust benefits that include health insurance, wellness benefits, a 401k with match, and unlimited PTO</li>
<li>Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000-200,000 per year</Salaryrange>
      <Skills>Healthcare data interoperability, Enterprise software implementations, API platforms, AI-enabled engineering workflows, Leadership skills, FHIR, HL7, CCD, EHR integrations, APIs, Data ingestion pipelines, Modern data platforms, Scalable implementation frameworks, Reusable integration patterns, Secure environments, Compliance frameworks, Executive presence, Strategic thinking, Self-starter mindset, Passion for improving healthcare</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Zus</Employername>
      <Employerlogo>https://logos.yubhub.co/zus.com.png</Employerlogo>
      <Employerdescription>Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations.</Employerdescription>
      <Employerwebsite>https://zus.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/zushealth/de7b4911-901f-4548-9d68-9b77c0ccf6b6</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
  </jobs>
</source>