<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>78270c8d-016</externalid>
      <Title>Operations Data Governance &amp; Controls Specialist</Title>
      <Description><![CDATA[<p>As an Operations Control Specialist – Data Governance &amp; Controls, you will design, implement, and support technical data governance solutions with a focus on the firm&#39;s Trader Master and related reference data domains.</p>
<p>This role requires a strong technical background in Data Management, Data Architecture, Data Lineage, Data Quality, Master Data Management (MDM), and automation within Financial Services and/or Technology.</p>
<p>You will contribute to and help lead the technical design of data governance controls, data models, and integration patterns, partnering closely with Technology and Operations teams.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build/enhance data governance frameworks, controls, standards, and workflows (policies, definitions, entitlements).</li>
<li>Create data quality rules and monitoring; automate exception detection, alerting, remediation, SLAs, and RCA.</li>
<li>Develop Python/SQL/ETL-ELT automation for checks, controls, and reporting; deliver Tableau/Power BI dashboards and KPIs.</li>
<li>Contribute to conceptual/logical/physical data modeling for Trader Master and core domains.</li>
<li>Support MDM capabilities: golden record, matching/merging, survivorship, stewardship workflows; help shape MDM strategy.</li>
<li>Implement access/entitlement governance (RBAC, row/column security) across DB/warehouse/BI with audit compliance.</li>
<li>Maintain catalog, glossary, lineage, schema history, impact analysis; manage structured change workflows.</li>
<li>Define integration patterns (batch/API/streaming) and build reconciliations/validations across systems.</li>
<li>Manage historical/temporal data (validation, backfills, remediation) supporting regulatory/reporting/analytics.</li>
<li>Produce technical documentation (designs, runbooks, data dictionaries), share knowledge, and mentor juniors.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, Information Systems, Mathematics, Finance, or related field; advanced degree (MS, MBA, or equivalent) is a plus.</li>
<li>5–8 years of experience in financial services or fintech with hands-on work in data engineering, data management, or data architecture roles; exposure to trading strategies, fund structures, and financial products strongly preferred.</li>
</ul>
<p>Technical Expertise (Required):</p>
<ul>
<li>Strong Python and SQL; experience with data warehousing + ETL/ELT.</li>
<li>Familiarity with MDM/data governance tools (e.g., Collibra, Informatica, Alation) and Tableau/Power BI.</li>
<li>Proven ability to lead delivery, solve complex data issues, and communicate with technical/non-technical stakeholders.</li>
<li>Preferred certs: DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $70,000 to $160,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$70,000 to $160,000</Salaryrange>
      <Skills>Python, SQL, ETL/ELT, Data Warehousing, Tableau/Power BI, MDM/data governance tools, Collibra, Informatica, Alation, DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Ops &amp; MO Control</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Ops &amp; MO Control provides data governance and control services.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>70000</Compensationmin>
      <Compensationmax>160000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954926796</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5dfa9c86-5c0</externalid>
      <Title>Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies</Title>
      <Description><![CDATA[<p>Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies</p>
<p>Global Insights, Analytics &amp; Forecasting, BBU</p>
<p>Hybrid Work: on average 3 days a week from the office</p>
<p>The Director, US Forecasting &amp; Analytics – Vaccines &amp; Immune Therapies is a senior commercial insights leader responsible for US demand forecasting and analytics across the V&amp;I portfolio. The role is predominantly forecast-focused, serving as the US forecasting lead and strategic thought partner to Marketing, Finance, Market Access, and US teams.</p>
<p>Responsibilities:</p>
<p>US Forecasting Leadership (Core Accountability)</p>
<ul>
<li>Lead US short-term and long-term demand forecasts (TRx, NBRx, volume, patients, revenue) for V&amp;I assets using robust, patient-based and market-based models</li>
<li>Own forecast methodology, assumptions, and governance, ensuring objectivity, transparency, and consistency with enterprise standards</li>
<li>Integrate primary market research, epidemiology, competitive intelligence, access dynamics, and real-world data into forecast models</li>
<li>Proactively identify and quantify key risks and opportunities through scenario and sensitivity analyses</li>
<li>Partner closely with Finance, Market Access &amp; Pricing, Marketing, Sales, Medical, and Global Forecasting to ensure alignment on assumptions and implications</li>
<li>Support business planning, governance reviews, and opportunity assessments with clear, executive-ready narratives</li>
<li>Serve as a trusted advisor to senior marketing and finance leadership, clearly articulating forecast drivers and changes</li>
</ul>
<p>Analytics &amp; Resource Leadership (Enablement)</p>
<ul>
<li>Provide leadership over forecasting-adjacent analytics, ensuring advanced analytics and insights are embedded into forecasting and business planning</li>
<li>Manage and prioritize internal analysts, contractors, and external vendors supporting forecasting and analytics deliverables</li>
<li>Partner with data analytics resources, Global IA&amp;F, and GIBEX capability teams to deploy new tools, data sources, and modeling approaches</li>
<li>Champion and identify new ways to embed AI and advanced automation into the practice of data analytics and forecasting to drive efficiency, scalability, and decision quality</li>
<li>Champion continuous improvement in forecasting processes, AI-enabled modeling, and automation</li>
<li>Contribute to the development and sharing of best practices across the V&amp;I forecasting community</li>
</ul>
<p>Essential for the role</p>
<ul>
<li>Bachelor’s degree in a quantitative, scientific, or business-related field required (e.g., Statistics, Economics, Mathematics, Engineering, Computer/Data Science).</li>
<li>8+ years’ experience in US pharmaceutical commercial forecasting, including in-market and late-stage pipeline assets</li>
<li>Hands-on model ownership experience (build, refresh, and performance tracking) across short- and long-term horizons</li>
<li>Expertise in scenario-based forecasting, sensitivity analysis, and driver-based narratives to support senior decision-making</li>
<li>Strong capability integrating multiple data types (e.g., IQVIA, claims, epidemiology, RWD/RWE, primary research) into coherent, decision-grade forecasts</li>
<li>Working knowledge of advanced analytics/ML approaches (e.g., time series, causal inference, ensembles) and where they add value vs. traditional methods</li>
<li>Fluency in modern analytics tooling and automation (e.g., Python/R/SQL, BI/visualization), with ability to partner effectively with data engineering and analytics teams</li>
<li>Demonstrated forecast governance and model risk discipline (traceable assumptions, documentation, and clear explanations)</li>
<li>Strong understanding of US market access and payer dynamics and how they impact demand (coverage, contracting, channel, policy)</li>
<li>Exceptional communication: translates complex analysis into clear, executive-ready insights, options, and recommendations</li>
<li>Strong commercial competence across key demand levers (positioning, adoption, competitive dynamics, lifecycle events)</li>
</ul>
<p>Desirable for the role</p>
<ul>
<li>Advanced degree preferred (e.g., MBA, MS, PhD in Statistics, Economics, Decision Sciences, Data Science, or related discipline).</li>
<li>Vaccines and/or Rare Disease experience, including familiarity with immunization dynamics, patient-based forecasting, and lifecycle management in preventive or immune-mediated therapies</li>
<li>Change leadership: builds adoption for new tools, processes, and ways of working across cross-functional stakeholders</li>
<li>Product mindset for forecasting: defines user needs, success metrics, and a roadmap for portfolio forecasting capabilities</li>
<li>Model lifecycle practices (e.g., reproducibility, versioning, monitoring/drift awareness); familiarity with MLOps concepts</li>
</ul>
<p>Office Working Requirements</p>
<p>When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That’s why we work, on average, a minimum of three days per week from the office. But that doesn’t mean we’re not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.</p>
<p>#LI-Hybrid</p>
<p>Date Posted: 10-Apr-2026</p>
<p>Closing Date: 23-Apr-2026</p>
<p>Our mission is to build an inclusive environment where equal employment opportunities are available to all applicants and employees. In furtherance of that mission, we welcome and consider applications from all qualified candidates, regardless of their protected characteristics. If you have a disability or special need that requires accommodation, please complete the corresponding section in the application form.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>forecasting, analytics, model ownership, scenario-based forecasting, sensitivity analysis, driver-based narratives, advanced analytics, machine learning, Python, R, SQL, BI/visualization, data engineering, forecast governance, model risk discipline, US market access, payer dynamics, exceptional communication, commercial competence, vaccines, rare disease, change leadership, product mindset, model lifecycle practices, MLOps</Skills>
      <Category>Finance</Category>
      <Industry>Healthcare</Industry>
      <Employername>Global Insights, Analytics &amp; Forecasting - V&amp;I</Employername>
      <Employerlogo>https://logos.yubhub.co/astrazeneca.eightfold.ai.png</Employerlogo>
      <Employerdescription>AstraZeneca&apos;s Global Insights, Analytics &amp; Forecasting - V&amp;I division focuses on providing insights and analytics to support vaccine and immune therapy development.</Employerdescription>
      <Employerwebsite>https://astrazeneca.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689756206</Applyto>
      <Location>Wilmington, Delaware, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows that connect operational systems, analytics data, and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform well, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build, and test data products that are complex or large scale, and build teams to complete data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognized data modeling patterns and standards and when to apply them, and compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design an appropriate metadata repository, present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products, and services as they occur; initiate actions, monitor services, and identify trends to resolve problems; and determine the appropriate remedy, assisting with its implementation and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct, and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications, define test conditions, identify issues and risks associated with the work, and analyze and report test activities and results.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor&apos;s degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>114400</Compensationmin>
      <Compensationmax>171600</Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months, you will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year, you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p>What You’ll Be Doing</p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over accumulating technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p>What You Bring</p>
<ul>
<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</li>
<li>The Modern Data Stack: familiarity with dbt and Airbyte/Fivetran, and an understanding of how these tools fit into a broader ecosystem.</li>
<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform).</li>
<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools; you know how to design DAGs that are resilient and easy to debug.</li>
<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</li>
<li>Programming: expert-level Python and advanced SQL; you are comfortable writing clean, testable, and modular code.</li>
<li>Comfort in a fast-paced environment.</li>
<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and handling your own project scoping and backlog management.</li>
<li>Fluency in English, both written and spoken, at a minimum C1 level.</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>
<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>
<li>Work in a diverse and multicultural team</li>
<li>€1,500 annual training budget plus internal training</li>
<li>Pension plan, travel reimbursement, and wellness perks</li>
<li>28 paid holiday days + 2 additional days to relax in 2026</li>
<li>Work from anywhere for 4 weeks/year</li>
<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>
<li>Apple MacBook and tools</li>
<li>€200 Home Office budget</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€70,000 to €90,000 per year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>70000</Compensationmin>
      <Compensationmax>90000</Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2aae11e-f20</externalid>
      <Title>Sr Genome Editing Operations Scientist</Title>
      <Description><![CDATA[<p>As a Genome Editing Operations Scientist at Bayer Crop Science, you will guide the development of an increasingly efficient gene editing pipeline by building connected data systems that drive decisions. You will connect disparate data sources and leverage key advancement data to group projects, reagents, and samples. Using this connected data system, you will deliver models that optimize resource use and pipeline capacity by integrating data awareness across lab, greenhouse, and field operations.</p>
<p>Your primary responsibilities will be to:</p>
<ul>
<li>Guide the development of highly connected data systems that enable data-driven, model-based analytics to improve pipeline effectiveness and efficiency;</li>
<li>Work with multifunctional teams to enable data connectivity across the editing pipeline, integrating information from lab, greenhouse, and field operations;</li>
<li>Collaborate with partner teams across Crop Science (Gene Editing, IT Enterprise, Data and Engineering) to automate decision making and improve operational efficiency to accelerate development of gene-edited products;</li>
<li>Serve as a key communicator translating business data knowledge and operational workflows into clear technical implementation plans for data scientists, data engineers, and developers;</li>
<li>Demonstrate autonomy in building relationships and networks within your unit and across functions, most often with members of the Crop Genome Editing team and closely aligned partner teams;</li>
<li>Act as a consultant to leadership and colleagues on digital strategy and data-driven operations through clear, organized, and influential communication;</li>
<li>Actively build your own acumen in biology, genome design, and digital operations while sharing best practices and learnings with the broader Biology and Genome Design community.</li>
</ul>
<p>We seek an incumbent who possesses the following qualifications:</p>
<ul>
<li>PhD in Computational Biology, Computer Science and Engineering, or another relevant scientific field with a minimum of 6 years of relevant experience, or MS with 10+ years of relevant experience;</li>
<li>Demonstrated track record developing data systems and pipelines that enable efficient product delivery and operational modeling;</li>
<li>Demonstrated experience working collaboratively in cross-functional and cross-cultural teams to achieve common goals;</li>
<li>Demonstrated experience leading and influencing activities of cross-functional teams without direct reporting relationships;</li>
<li>Ability to lead and influence key stakeholders through challenges and opportunities and to facilitate solutions.</li>
</ul>
<p>Preferred qualifications include experience building data pipelines as a ML DevOps Engineer or Data Engineer, experience with Operations Research, and experience analyzing large biological datasets and developing analytical pipelines using Python, R, or similar software and languages.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Computational Biology, Computer Science and Engineering, Data Systems, Pipeline Development, Collaboration, Communication, Digital Strategy, Data-Driven Operations, ML DevOps Engineer, Data Engineer, Operations Research, Python, R, Cloud Development Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer Crop Science develops crop protection and biotechnology products for agriculture.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>114400</Compensationmin>
      <Compensationmax>171600</Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976597728</Applyto>
      <Location>Chesterfield</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>02c944ab-f9e</externalid>
      <Title>Senior Data Scientist - Dynamic Pricing &amp; Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Analyst and Data Engineer. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic and data driven pricing strategy.</p>
<p>You&#39;ll work with a large and rich dataset, modern tooling, and teammates who care deeply about impact, collaboration, and learning together. This role is based in Munich with 3 office days per week.</p>
<p>As a Senior Data Scientist, you&#39;ll take ownership of complex pricing and forecasting models and help us turn analytical ideas into real-world impact for hosts and Holidu. You will:</p>
<ul>
<li>Translate business questions into scientific, testable models and clear recommendations.</li>
<li>Design, build and own machine learning, forecasting and predictive models for revenue management topics such as demand forecasting, price sensitivity, and conversion probability.</li>
<li>Explore and develop dynamic pricing strategies (e.g. weekend pricing, early discounts, regional similarities) using data and experimentation.</li>
<li>Collaborate closely with Data Analysts and Data Engineers to define datasets, features, and model requirements.</li>
<li>Drive discussions around model choice, assumptions, and trade-offs, always keeping business impact in mind.</li>
<li>Monitor model performance, iterate on results, and continuously improve accuracy and relevance.</li>
<li>Act as a senior sparring partner in the team, sharing knowledge and raising the bar for data science practices.</li>
</ul>
<p>You&#39;ll have 5+ years of experience as a Data Scientist, solving a variety of different business problems. You&#39;ll have a strong background in statistics, forecasting, and machine learning. You&#39;ll be hands-on with Python and SQL, and confident working with large datasets. You&#39;ll have a strong interest in pricing, revenue optimization, or marketplace dynamics (prior revenue management experience is a plus, not a must).</p>
<p>You&#39;ll be a self-starter: proactive, hungry to learn, and eager to make an impact. You&#39;ll be able to communicate complex ideas clearly and collaborate with technical and non-technical partners.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Machine Learning, Forecasting, Predictive Modeling, Data Science, Data Analysis, Data Engineering, Dynamic Pricing, Revenue Optimization, Marketplace Dynamics, Cloud Computing, Big Data, Data Visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides a search engine for vacation rentals. It was founded in 2014 and has since grown to become one of the leading players in the market.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2518625</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5d48ddb1-b45</externalid>
      <Title>Mission Software Engineering Manager, Public Sector</Title>
      <Description><![CDATA[<p>We are looking for a Mission Software Engineering Manager to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Recruit a high-performing engineering team.</li>
<li>Drive engineering productivity. Provide guidance, mentorship, and technical leadership to a team of engineers working on Generative AI projects.</li>
<li>Collaborate with cross-functional teams to define, design, and execute the strategic roadmap.</li>
<li>Work directly with customers to understand their problems and translate those into features in Scale’s platform.</li>
<li>Be open to ~25% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer’s data to Scale’s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites.</li>
<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation</li>
<li>2+ years of prior engineering management or equivalent experience, including having managed an engineering team.</li>
<li>Track record of success as a hybrid customer-facing engineer and forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc</li>
<li>Networking experience: Understanding of networking technologies, configuration (ports, protocols, etc)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>
<li>Understand unique DoD and USG constraints when it comes to technology</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$273,700-$341,550 USD</Salaryrange>
      <Skills>Python, JavaScript, Cloud-Native Technologies, Linux, Networking, Data Engineering, Problem Solving, Node, React, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4631039005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3fa0b80f-842</externalid>
      <Title>Staff Software Engineer, Public Sector</Title>
<Description><![CDATA[
<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software solutions for the public sector</li>
<li>Work closely with cross-functional teams to develop and deploy software applications</li>
<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>
<li>Develop and maintain software documentation</li>
<li>Participate in code reviews and ensure that code meets quality standards</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Proficiency in programming languages such as Java, Python, or C++</li>
<li>Experience with Agile development methodologies</li>
<li>Strong understanding of software design patterns and principles</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>10+ years of experience in software development</li>
<li>Experience with cloud-based technologies such as AWS or Azure</li>
<li>Experience with DevOps practices</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Salary Range: $252,000-$362,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>Full Stack Development</li>
<li>Cloud-Native Technologies</li>
<li>Data Engineering</li>
<li>AI Application Integration</li>
<li>Problem Solving</li>
<li>Collaboration and Communication</li>
<li>Adaptability and Learning Agility</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with modern web development frameworks</li>
<li>Familiarity with cloud platforms</li>
<li>Understanding of containerization and container orchestration</li>
<li>Knowledge of ETL processes</li>
<li>Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Familiarity with integrating Large Language Models</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$362,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674913005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad5c420d-b2d</externalid>
      <Title>Senior Solutions Architect - Lakebase</Title>
<Description><![CDATA[<p>The Solutions Architect (Lakebase) team executes on Databricks&#39; strategic Product Operating Model, which provides enhanced focus on earlier-stage, highly prioritised product lines in order to establish product-market fit and set the course for rapid revenue growth.</p>
<p>They are part of a global go-to-market team mandate, though each will individually cover a specific, local region. Clients may span one or more business units and verticals.</p>
<p>By working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, and position the specific product line in depth, giving clients compelling reasons to adopt and grow their usage of the product.</p>
<p>The Solutions Architect (Lakebase) is paired with an Account Executive aligned to a given product line, with specific targets accordingly. Together, they will devise and implement a strategy across their assigned set of accounts, and develop and deliver presentations, demos, and other assets so that clients can make an informed decision as they adopt the product line in a meaningful way.</p>
<p>The Lakebase product-line requires the following core technical competencies:</p>
<ul>
<li>10+ years of transactional database (OLTP) expertise across engineering, product development, administration, and pre-sales, with a proven track record of designing and delivering client-facing solutions.</li>
<li>Credibility in influencing OLTP products with the market insight needed to shape and prioritise roadmap capabilities.</li>
<li>Experience architecting solutions that integrate transactional data systems within broader Big Data, Lakehouse, and AI ecosystems.</li>
<li>Infrastructure, platform and administration expertise around disaster recovery, high availability, backup and recovery, scale-out methods, identity and security management, migrations (vendor-to-vendor, on-prem to cloud)</li>
</ul>
<p>Impact</p>
<ul>
<li>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>
<li>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</li>
<li>Enable clients at scale through workshops and by developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>
<li>Influence the product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>
<li>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</li>
</ul>
<p>Competencies &amp; Responsibilities</p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organisations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience (in two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering or similar.</li>
<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Transactional database (OLTP), Cloud infrastructure, Data engineering, Data warehousing, AI, ML, Governance, Transactional systems, App development, Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407181002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>04c1ff49-2d1</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
<Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical work to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel to customers 10% of the time.</li>
</ul>
<p>[Preferred] Databricks Certification (not essential)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8396801002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>859ea28b-09c</externalid>
      <Title>Sr. Manager, Field Engineering - Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a Senior Manager, Solutions Architects to lead our Federal System Integrators and Defense Industrial Base (FSI/DIB) team. You will lead and promote a dynamic team focusing on enterprise software, big data/analytics, data engineering, data science, data warehousing, and generative AI.</p>
<p>Leading the technical sales team, you will partner with Sales (and other Field Engineering technical segments) to increase revenue and help customers become wildly successful. You&#39;ll scale and maintain an outstanding Field Engineering team that operates efficiently to help accelerate Databricks&#39; market growth.</p>
<p>The Impact You Will Have:</p>
<ul>
<li>Alongside the sales Director, you will set the vision and strategy for how this team will accelerate transformative outcomes for FSI and DIB customers</li>
<li>You will hire, train, lead, and grow the Solutions Architect team for a company in high-growth mode</li>
<li>Help your customers achieve exceptional success with Databricks and deliver substantial value to their businesses</li>
<li>You will maintain a robust hiring pipeline at all times</li>
<li>Establish relationships across the business to make your customers and team successful</li>
<li>Keep your team of SAs ahead of the technical curve</li>
</ul>
<p>An SA adds value by maintaining advanced knowledge of the technology stack. You will make sure that your team is continuously learning and working to provide our customers with the most comprehensive solutions for their needs.</p>
<p>What We Look For:</p>
<ul>
<li>Proven experience building and leading technical pre-or post-sales teams</li>
<li>5+ years of experience in the data space with a technical product (i.e., data warehousing, big data, machine learning, or more recently with generative AI)</li>
<li>Alternatively, 5+ years of experience in mission-focused space impacting DIB or FSI from a technology perspective</li>
<li>Trusted advisor to technical executives, guiding strategic data infrastructure decisions</li>
<li>Lead a team through best practices for technical qualification, technical validations, architecture discussions, and product demonstrations</li>
<li>Strategic mindset focused on driving customer outcomes in an accelerated fashion</li>
<li>Experience hiring candidates who continually raise the bar, ramping them up to be successful, and promoting them into larger roles</li>
<li>Ability to create positive morale for the team and foster a working relationship between Field Engineering, Sales, and other important internal cross-functional teams</li>
<li>Experience working cross-functionally with other teams such as Sales, Product Management, Engineering, and Customer Success</li>
<li>Technical and business skills to earn the trust of Engineering talent and leadership at Databricks</li>
<li>Demonstrated architectural influence. Able to influence and review complex architectures; guiding your team and customers towards ideal solutions</li>
</ul>
<p>Pay Range Transparency: Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here. Local Pay Range $192,100-$264,175 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data engineering, data science, data warehousing, generative AI, enterprise software, big data/analytics, technical sales team management, cross-functional team management, strategic planning, hiring and talent development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company providing a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8332835002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f86a39bf-9a5</externalid>
      <Title>Solutions Architect - Digital Native Business, Strategic</Title>
      <Description><![CDATA[<p>As a Solutions Architect on the Digital Natives team, you will work with leading data engineering, data science, and ML teams to push the boundaries of what big data architectures are capable of.</p>
<p>Reporting to the Field Engineering Manager, you will collaborate with strategic customers, product teams, and the broader customer-facing team to develop architectures and solutions using our platform and APIs.</p>
<p>You will guide customers through the competitive landscape, best practices, and implementation; and develop technical champions along the way.</p>
<p>We are looking for high technical aptitude individuals with a deep sense of ownership and a desire to help customers ship solutions at production scale.</p>
<p>Ideal candidates are deeply curious, capable of operating with confidence in ambiguous situations, and are extremely adaptable.</p>
<p>The impact you will have:</p>
<ul>
<li>Partner with the sales team and provide technical leadership to help customers understand how Databricks can help solve their business problems.</li>
<li>Drive technical discovery and solution design, focusing on winning competitive deals and accelerating time-to-value in strategic accounts.</li>
<li>Continuously research &amp; learn new technologies and their implementations on Databricks.</li>
<li>Consult on Big Data architectures and implement proofs of concept for strategic projects spanning data engineering, data science, machine learning, and SQL analysis workflows.</li>
<li>Validate integrations with cloud services, home-grown tools, and other 3rd-party applications.</li>
<li>Collaborate with your fellow Solutions Architects, using your skills to support each other and our customers.</li>
<li>Become an expert in, promote, and recruit contributors for Databricks-inspired open-source projects (Spark, Delta Lake, and MLflow) across the developer community.</li>
<li>Work closely with account executives to create and execute account penetration strategies, focusing on winning technical decision-makers and building new customer champions.</li>
<li>Build trusted advisor relationships with senior and executive stakeholders by articulating the business value of Databricks in clear, outcomes-driven terms.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a data engineering, data science, technical architecture, or similar pre-sales/consulting role.</li>
<li>Experience building distributed data systems.</li>
<li>Comfortable programming in, and debugging, Python and SQL.</li>
<li>Have built solutions with public cloud providers such as AWS, Azure, or GCP.</li>
<li>Expertise in one of the following:
<ul>
<li>Data Engineering technologies (e.g. Spark, Hadoop, Kafka)</li>
<li>Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, PyTorch, TensorFlow)</li>
</ul>
</li>
<li>Strong executive presence with the ability to influence C/VP-level stakeholders and align technical solutions to strategic business priorities.</li>
<li>Available to travel to customers in your region.</li>
<li>[Desired] Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research).</li>
<li>Nice to have: Databricks Certification.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Data Engineering technologies, Data Science and Machine Learning technologies, Python, SQL, Cloud providers (AWS, Azure, GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8434467002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1f35a9c-2e5</externalid>
      <Title>Staff Data Scientist</Title>
      <Description><![CDATA[<p>We are looking for a Staff Product Data Scientist to join our Data &amp; Insights team. This role will lead the strategy, development, and operationalization of advanced analytics and machine learning solutions that power data intelligence for Okta’s Product Management teams, uncovering actionable insights on customer behavior, product engagement, and opportunities that will drive our product growth strategy.</p>
<p>You’ll define the long-term data science roadmap while partnering closely with Okta’s diverse Product Management teams and executive leadership.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate closely with Product Managers and leadership to provide insights that will help drive product strategy and roadmaps, ensuring decisions are grounded in a data-driven approach.</li>
<li>Build and deploy statistical and machine learning models (e.g., predicting churn, lifetime value, feature adoption) to forecast user behavior and product growth opportunities in order to help influence roadmap decisions.</li>
<li>Conduct thorough exploratory analysis on extensive, complex datasets to uncover key drivers of user adoption and engagement, identifying unseen opportunities for significant product improvement.</li>
<li>Work with data engineering, analytics engineering, and data analysts to shape and enhance the foundational data infrastructure necessary for scalable ML and advanced analytics initiatives.</li>
<li>Develop and construct usable data sets by integrating and manipulating information from various disparate data sources as needed.</li>
<li>Translate intricate data insights into clear, compelling narratives for executives, product managers, and engineers, effectively influencing crucial business and product decisions.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of experience in data science or ML, including 3–5 years in a senior, staff, or technical leadership role.</li>
<li>Deep understanding of product analytics and data science frameworks, with a proven history of designing and analyzing product data to solve complex, ambiguous business problems and deliver measurable results.</li>
<li>Demonstrated ability to apply cutting-edge AI tools to accelerate the discovery of deep, actionable insights from complex product data.</li>
<li>Demonstrated ability to translate data insights into product impact and formulate strategic, data-driven recommendations.</li>
<li>Deep expertise in machine learning algorithms (supervised, unsupervised, NLP, forecasting, optimization) and statistical modeling.</li>
<li>Strong proficiency in Python, SQL, and leading ML libraries.</li>
<li>Excellent communication skills and a proven ability to influence Product and executive partners.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience in high-growth SaaS, cybersecurity, identity, or enterprise software environments.</li>
<li>Prior ownership of ML platform/tooling decisions and evaluations.</li>
<li>Experience enabling self-service analytics or citizen data science capabilities.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$184,000-$253,000 USD</Salaryrange>
      <Skills>data science, machine learning, Python, SQL, statistical modeling, data engineering, analytics engineering, data analysts, high-growth SaaS, cybersecurity, identity, enterprise software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a leading independent identity provider that secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7731595</Applyto>
      <Location>Bellevue, Washington; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b244f27-9fd</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements, tackling their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionizing customer use cases. You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>
<p>You will guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications. You will consult on architecture and design, and bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</p>
<p>You will provide an escalated level of support for customer operational issues, working with the Databricks technical team, Project Manager, Architect, and customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</p>
<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfort writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Experience designing and deploying performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Readiness to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
</ul>
<p>Travel to customers 20% of the time.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461258002</Applyto>
      <Location>Raleigh, North Carolina</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>63a79841-36e</externalid>
      <Title>Solutions Architect (Vietnam)</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re seeking a Solutions Architect to join our Field Engineering team in Vietnam. As a key member of our team, you will work closely with customers to understand their complex data challenges and provide technical expertise to demonstrate how our Data Intelligence Platform can help them solve these issues.</p>
<p>You will form successful relationships with clients throughout Vietnam, providing technical and business value to Databricks customers in collaboration with Account Executives. You will operate as an expert in big data analytics, developing into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Developing customer relationships and building internal partnerships with account executives and teams</li>
<li>Engaging customers in technical sales, challenging their questions, guiding clear outcomes, and communicating technical and value propositions</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Prior experience coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Spark</li>
<li>Proficiency with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s)</li>
<li>Experience in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring the ability to switch between levels of technical depth</li>
<li>Proficiency in Vietnamese, as this role serves clients based in Vietnam and involves direct customer communications in Vietnamese</li>
</ul>
<p>In return, you will have the opportunity to grow your knowledge and expertise to the level of a technical and/or industry specialist, and contribute to the success of our customers and the growth of our organization.</p>
<p>If you&#39;re passionate about working with data and AI, and want to make a real impact, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Big Data Analytics, Spark, Cloud Computing, Data Science, Machine Learning, Data Engineering, Data Architecture, Cloud Security, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organizations worldwide rely on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8472732002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e5fa8591-cb8</externalid>
      <Title>Solutions Architect: Data &amp; AI</Title>
      <Description><![CDATA[<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>
<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>The impact you will have:</p>
<ul>
<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>
<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>
<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>
<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>
</ul>
<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value.</p>
<p>What we look for:</p>
<ul>
<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>
<li>Core strength in either data engineering or data science technologies</li>
<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>
<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>
<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>
<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8353757002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43334479-97e</externalid>
      <Title>Sr Analytics Engineer - GTM Strategy and Operations</Title>
      <Description><![CDATA[<p>As a Senior Analytics Engineer, you will be a critical partner to the Global GTM Strategy &amp; Operations teams, providing the data, AI-driven insights, and infrastructure needed to drive efficiency and effectiveness across the organization.</p>
<p>You will design, build, and maintain scalable data models, curated reporting tables, forecasts, and dashboards that support everyone from senior executives to individual contributors, empowering them to make informed decisions and spend more time driving customer outcomes.</p>
<p>Working closely with cross-functional stakeholders, including Sales, Finance, Marketing, and other data teams, you will tackle complex data challenges by leveraging structured data, building AI-powered querying assistants, and using tools like Databricks Genie to improve data accessibility, streamline insights, and deliver actionable, reliable solutions across the business.</p>
<p>You will also play a key role in advancing our newly created AI initiatives and semantic data curation efforts, helping to establish a strong foundation for advanced analytics, automation, and scalable business intelligence.</p>
<p>The Impact You Will Have:</p>
<ul>
<li>Build: You will design and develop analytic tools, including a semantic layer for AI use cases, scalable data models, curated tables, and insightful analyses that empower thousands of field employees and leaders worldwide.</li>
<li>Architect: You will both manage the requirements gathering and lead execution of strategic analytic projects.</li>
<li>Scale: You will build and manage relationships with stakeholders across the company, primarily with the GTM strategy and operations team.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>You have 4+ years of experience working as an Analyst / Data Engineer / Analytics Engineer with B2B sales, marketing, or finance data (GTM experience highly preferred).</li>
<li>You are data-savvy with 3+ years of SQL and 2+ years of Python experience. Familiarity with data ecosystems and BI tools (e.g., Databricks, PowerBI) is required.</li>
<li>You have built for scale. You have experience building scalable and productionizable data models with best practices in mind.</li>
<li>You integrate AI into your daily workflow. You have hands-on experience using large language model tools (such as Claude or similar) to accelerate analytics work, from drafting and debugging code to synthesizing requirements and generating documentation.</li>
<li>You&#39;re comfortable evaluating AI-generated outputs critically and iterating quickly.</li>
<li>You are passionate about applying AI to transform GTM teams. You bring experience in delivering AI-driven solutions and have the ability to design innovative use cases as well as structure data models and tables that are optimized for AI readiness.</li>
<li>You excel in partnering with the business, understanding the impact of your work on GTM, and creating innovative solutions.</li>
<li>You have a track record of cross-functional collaboration and strong stakeholder relationships.</li>
<li>You excel in a collaborative environment. You translate team member needs into clear tasks and deliverables for contributors.</li>
<li>You work through dependencies, bottlenecks, and tradeoffs with ease.</li>
<li>You have a service-oriented mindset.</li>
<li>You are curious, creative, and kind.</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range: $133,000-$182,950 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$133,000-$182,950 USD</Salaryrange>
      <Skills>SQL, Python, Databricks, PowerBI, Data Engineering, Analytics Engineering, AI, Machine Learning, Large Language Model Tools, Claude, Semantic Data Curation, Advanced Analytics, Automation, Scalable Business Intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8479036002</Applyto>
      <Location>New York; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a2daa2d-9ff</externalid>
      <Title>Manager, Field Engineering</Title>
      <Description><![CDATA[<p>As a Manager, Field Engineering (Solutions Architects), you will build and lead a team of pre-sales Solutions Architects focusing on your assigned accounts. Your experience partnering with the sales organisation will help close revenue with the right approach whilst coaching new sales and pre-sales team members to work together.</p>
<p>You will guide and get involved to enhance your team&#39;s effectiveness; be an expert at communicating complex, business value-focused solutions; support complex sales cycles; and build relationships with key stakeholders in your customers&#39; companies.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Managing hiring and building the pre-sales team of Solutions Architects</li>
<li>Rapidly scaling the designated Field Engineering segment organisation without sacrificing quality</li>
<li>Building a collaborative culture within a rapid-growth team</li>
<li>Embodying and promoting Databricks&#39; customer-obsessed, collaborative, and diverse culture</li>
<li>Supporting a 2-3x increase in the return on investment of SA involvement in sales cycles over 18 months</li>
<li>Promoting a solution and value-based selling field-engineering organisation</li>
<li>Displaying an understanding of business needs and revenue potential for accounts in the assigned region</li>
<li>Building Databricks&#39; brand in partnership with the Marketing and Sales team</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Cloud, SaaS, Data Architecture, Data Engineering, Database technologies, Data Science, Digital Native companies/ecosystems, AI, Cloud software models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide, including Comcast, Condé Nast, and Grammarly, rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8316258002</Applyto>
      <Location>Melbourne, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9cd51bc1-2ce</externalid>
      <Title>Field Engineering Manager, Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re looking for a hands-on Field Engineering Manager to lead a team of field engineers on our federal AI projects. As a Field Engineering Manager, you will play a critical role in delivering technical solutions while helping your team grow and execute. You will split your time between technical planning and execution (50%) and people management and team development (50%).</p>
<p>Responsibilities:</p>
<ul>
<li>Recruit a high-performing engineering team.</li>
<li>Drive engineering productivity.</li>
<li>Provide guidance, mentorship, and technical leadership to a team of customer-facing engineers working on Generative AI projects.</li>
<li>Collaborate with cross-functional teams to define, design, and execute strategic roadmaps based on customer needs.</li>
<li>Work directly with customers to understand their problems and translate those into custom workflows within Scale&#39;s platform.</li>
<li>Be open to ~25% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer&#39;s data to Scale&#39;s platform and back.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
<li>Lead end-to-end technical delivery of custom workflows from scoping to development to testing to implementation.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>2+ years of engineering management or equivalent experience, including having previously managed an engineering team.</li>
<li>Track record of success as a hybrid customer-facing engineer and forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment.</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc.</li>
<li>Networking experience: Understanding of networking technologies, configuration (ports, protocols, etc) is a plus.</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions.</li>
<li>Understand unique DoD and USG constraints when it comes to technology.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval.</p>
<p>You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$213,900-$267,950 USD</Salaryrange>
      <Skills>Cloud-Native Technologies, Linux, Networking, Data Engineering, Problem Solving, Python, JavaScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674529005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f1a5b85-116</externalid>
      <Title>Mission Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Mission Software Engineer to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work directly with customers to understand their problems and translate those into features in Scale&#39;s platform.</li>
<li>Be open to &gt;50% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer&#39;s data to Scale&#39;s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites.</li>
<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>Track record of success as a hybrid customer-facing engineer and forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc.</li>
<li>Networking experience: Understanding of networking technologies, configuration (ports, protocols, etc)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval.</p>
<p>Benefits:</p>
<ul>
<li>Comprehensive health, dental, and vision coverage</li>
<li>Retirement benefits</li>
<li>A learning and development stipend</li>
<li>Generous PTO</li>
<li>Commuter stipend</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$138,000-$292,560 USD</Salaryrange>
      <Skills>Python, JavaScript, Node, React, Cloud-Native Technologies, Linux, Networking, Data Engineering, ETL, Data Modeling, Data Warehousing, Data Governance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4481921005</Applyto>
<Location>Boston, MA; Honolulu, HI; San Diego, CA; San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10290548-1ea</externalid>
      <Title>Solutions Architect - Public Sector (LEAPS)</Title>
<Description><![CDATA[<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to realize value.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>
<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>
<li>Consulting on big data architecture, implementing proof of concepts for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd-party applications</li>
<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>
<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>
<li>Traveling to customers in your region</li>
</ul>
<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data engineering, data analytics, and data science and machine learning.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8320126002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>de79b405-a32</externalid>
      <Title>Sr. Manager, Field Engineering - Financial Services</Title>
      <Description><![CDATA[<p>Job Title: Sr. Manager, Field Engineering - Financial Services</p>
<p>We are seeking a seasoned leader to join our Field Engineering team as a Sr. Manager, Field Engineering - Financial Services. As a key member of our team, you will be responsible for leading a team of Solutions Architects for the Financial Services segment of Databricks&#39; Regulated Industries Field Engineering team.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and promote a diverse team focusing on enterprise software, big data/analytics, data engineering, data science, data warehousing, and generative AI.</li>
<li>Partner with Sales (and other Field Engineering technical segments) to increase revenue and help customers achieve success through Databricks&#39; Data Intelligence Platform.</li>
<li>Scale and maintain an outstanding Field Engineering team that is efficient in its operations to help accelerate Databricks&#39; growth in the market.</li>
</ul>
<p>The Impact You Will Have:</p>
<ul>
<li>Hire, train, and grow a team of Solutions Architects for a company in high-growth mode.</li>
<li>Make your customers extremely successful with Databricks and provide outsized value to their businesses.</li>
<li>Maintain a robust hiring pipeline of highly qualified, bar-raising candidates.</li>
<li>Establish relationships across the business to make your customers and team successful.</li>
<li>Keep your team of SAs ahead of the technical curve by ensuring they maintain advanced knowledge of the Databricks technology stack.</li>
<li>Ensure that your team is continuously learning and working to provide our customers with the most comprehensive solutions for their needs.</li>
</ul>
<p>What We Look For:</p>
<ul>
<li>Proven experience building and leading technical pre-sales teams.</li>
<li>10+ years of experience in the data space with a technical product (e.g., data warehousing, big data, machine learning, or, more recently, generative AI).</li>
<li>A trusted advisor to technical executives, guiding strategic data infrastructure decisions.</li>
<li>Strong understanding of consumption-driven business models and the recipes for long-term growth and success.</li>
<li>Experience hiring candidates that continually raise the bar, ramping them up to be successful, and promoting into larger roles.</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $192,100-$264,175 USD.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, see the Databricks careers site.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data engineering, data science, data warehousing, generative AI, big data/analytics, enterprise software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8466584002</Applyto>
      <Location>Illinois; Massachusetts; New York; Virginia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a39aec14-e5b</externalid>
      <Title>Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>At Scale, we&#39;re looking for talented Software Engineers to join our Public Sector team. As a Software Engineer, you will own the development of a vertical feature or a horizontal capability, including defining requirements with stakeholders and implementation until it is accepted by the stakeholders.</p>
<p>Design and implement scalable backend systems for Federal customers using cloud-native AI infrastructure.</p>
<p>Build features for agentic systems, including multi-layered guardrails and data retrieval optimization.</p>
<p>Develop data pipelines and machine learning infrastructure to make data sources accessible by agents.</p>
<p>Collaborate with cross-functional teams to execute backend solutions for secure environments.</p>
<p>Participate in customer engagements to understand requirements and deliver technical solutions.</p>
<p>Contribute to the platform roadmap and product strategy for the Federal business.</p>
<p>We&#39;re looking for talented individuals with expertise in Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, and Adaptability and Learning Agility.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, and Seattle is: $180,000-$225,000 USD</p>
<p>The base salary range for this full-time position in the locations of Hawaii, Washington DC, Texas, and Colorado is: $162,400-$233,000 USD</p>
<p>The base salary range for this full-time position in the location of St. Louis is: $135,200-$194,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
<Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4302243005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fdc6f0f9-900</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks, helping customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to the customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Ability to build skills in technical areas that support the deployment and integration of Databricks-based solutions for customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information about which range your location falls in, see the Databricks careers site.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, distributed computing, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461168002</Applyto>
      <Location>Los Angeles, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cf82e408-47b</externalid>
      <Title>Scaling Experiments for AI-designed Medicines</Title>
      <Description><![CDATA[<p>At Inceptive, you will help pioneer the next generation of AI-designed drugs, with the potential to positively impact billions of people, as part of a collaborative, interdisciplinary team.</p>
<p>Training on natural and experimentally-derived data is a core aspect of how our AI models learn to generate therapeutic molecules with exceptional properties. We invest deeply in building and scaling data sources to train and evaluate our models for maximal performance. At the same time, careful validation in orthogonal, translationally-relevant assays is crucial to orient our models.</p>
<p>We are seeking a senior scientific leader versed in high-throughput and translational biology to drive the continued growth and impact of our Palo Alto Lab. You will be a force multiplier for our scientists and engineers, ensuring a high standard of scientific rigor and output in a fast-paced, dynamic environment. You will also interface closely with our AI team to maximize the impact of internal data on Inceptive’s foundation models.</p>
<p>Your Mission, should you choose to accept it:</p>
<ul>
<li>Lead development and scale-up of high-throughput assays across in vitro, cellular, and in vivo systems, leveraging multiplexed assays and laboratory automation</li>
<li>Define and execute a lab strategy aligned with Inceptive’s therapeutic and platform priorities, while remaining flexible as modality and partnership needs evolve</li>
<li>Champion an interdisciplinary culture, encouraging curiosity, rigor, and collaboration across scientific boundaries</li>
<li>Manage, mentor, and develop a multidisciplinary team of scientists and engineers that rapidly generates high-impact biological data to improve and validate AI design models</li>
<li>Oversee the development of industry-standard validation assays to keep models and data generation aligned with downstream therapeutic application</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>PhD and 6+ years of post-PhD experience in industrial research applied to drug development</li>
<li>Experience managing and mentoring a data-driven lab of 10+ scientists</li>
<li>Experience using high-throughput and/or highly multiplexed assays to generate rich datasets from mammalian cells</li>
<li>Proven ability to set scientific direction while also executing operationally</li>
<li>Deep understanding of theory, techniques, and experimental design in molecular and cellular biology</li>
<li>Visualization, analysis, and statistics applied to complex biological datasets</li>
<li>Availability to work with team members across the US and Europe, with meetings starting at 7am PT</li>
<li>Readiness to travel several times a year for company retreats and business events</li>
<li>We value in-person collaboration and expect candidates to work at our lab location</li>
</ul>
<p>Preferred skills:</p>
<ul>
<li>Translational experience with mRNA, oligonucleotides, or other genetic medicines</li>
<li>Expertise in immunology and/or cell therapy</li>
<li>Hands-on experience in RNA biology and biochemistry</li>
<li>Scientific programming in Python</li>
<li>Hands-on experience with modern data engineering workflows</li>
</ul>
<p>Compensation: $245K – $305K + Bonus + Equity</p>
<p>What we offer:</p>
<ul>
<li>A competitive compensation package</li>
<li>30 days paid vacation per year</li>
<li>Comprehensive health insurance for US-based Beginners</li>
<li>401K with company match for US-based Beginners and Direktversicherung for German Beginners</li>
<li>Quarterly company-wide retreats</li>
<li>Monthly wellness benefit</li>
<li>Budget for multiple visits per year to our offices in Berlin, Palo Alto, or Switzerland</li>
<li>Learning &amp; Development budget to attend conferences, take courses, or otherwise invest in your professional growth, as well as access to the Learning &amp; Development platform EdX and Hone</li>
<li>A buddy to help you get settled</li>
</ul>
<p>*Varies by country and does not apply to internships</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245K – $305K + Bonus + Equity</Salaryrange>
      <Skills>high-throughput and translational biology, AI model development, data science, molecular and cellular biology, experimental design, translational experience with mRNA, oligonucleotides, or other genetic medicines, expertise in immunology and/or cell therapy, hands-on experience in RNA biology and biochemistry, scientific programming in Python, hands-on experience with modern data engineering workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Biotechnology</Industry>
      <Employername>Inceptive</Employername>
      <Employerlogo>https://logos.yubhub.co/inceptive.com.png</Employerlogo>
      <Employerdescription>Inceptive is a biotechnology company developing AI-designed drugs.</Employerdescription>
      <Employerwebsite>https://inceptive.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/inceptive/jobs/5060348007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cb18189c-d78</externalid>
      <Title>Solutions Architect (Pre-sales) - Kansai Region</Title>
      <Description><![CDATA[<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>
<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>
<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Be a Big Data Analytics expert on aspects of architecture and design</li>
<li>Lead your prospects through evaluating and adopting Databricks</li>
<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>
<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>
<li>Experience in a customer-facing pre-sales or consulting role; a core strength in either Data Engineering or Data Science is advantageous</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>
<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>
<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>
<li>Experience working with Enterprise Accounts</li>
<li>Written and verbal fluency in Japanese</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, see the Databricks careers site.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
<Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437028002</Applyto>
      <Location>Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ea3861b8-d43</externalid>
      <Title>Manager, Field Engineering</Title>
      <Description><![CDATA[<p>Job Title: Manager, Field Engineering</p>
<p>We are seeking a highly experienced Manager, Field Engineering to lead a team of Solutions Architects for our Field Engineering team in LATAM. As a Manager, Field Engineering, you will be responsible for leading and promoting a dynamic team focusing on enterprise software, big data/analytics, data engineering, data science, data warehousing, and generative AI.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and promote a dynamic team of Solutions Architects</li>
<li>Partner with Sales and other Field Engineering technical segments to increase revenue and help customers become wildly successful</li>
<li>Scale and maintain an outstanding Field Engineering team that is efficient in its operations to help accelerate Databricks&#39; growth in the market</li>
</ul>
<p>The Impact You Will Have:</p>
<ul>
<li>Hire, train, and grow a team of Solutions Architects for a company in high-growth mode</li>
<li>Make your customers extremely successful with Databricks and provide outsized value to their businesses</li>
<li>Maintain a robust hiring pipeline at all times</li>
<li>Establish relationships across the business to make your customers and team successful</li>
<li>Keep your team of SAs ahead of the technical curve</li>
</ul>
<p>What We Look For:</p>
<ul>
<li>Proven experience building and leading technical pre- or post-sales teams</li>
<li>10+ years of experience in the data space with a technical product</li>
<li>A trusted advisor to technical executives, guiding strategic data infrastructure decisions</li>
<li>Strong understanding of consumption-driven business models and the recipes for long-term growth and success</li>
<li>Experience hiring candidates that continually raise the bar, ramping them up to be successful, and promoting into larger roles</li>
</ul>
<p>About Databricks:</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, see the Databricks careers site.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data science, data warehousing, generative AI, enterprise software, big data/analytics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
<Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8360067002</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2895081b-eab</externalid>
      <Title>Sr. Specialist Solutions Architect</Title>
<Description><![CDATA[<p>As a Sr. Specialist Solutions Architect, you will guide customers in building big data solutions on Databricks that span a large variety of use cases. This is a customer-facing role, working with and supporting Solution Architects, and it requires hands-on production experience with Apache Spark and expertise in other data technologies.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Providing technical leadership to guide strategic customers to successful implementations on big data projects</li>
<li>Architecting production-level data pipelines</li>
<li>Becoming a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows</li>
<li>Assisting Solution Architects with more advanced aspects of the technical sale</li>
<li>Contributing to the Databricks Community</li>
</ul>
<p>To succeed in this role, you will need a strong background in software engineering and data engineering, with expertise in at least one of the following areas: software engineering/data engineering, data applications engineering, or deep specialty expertise such as scaling big data workloads, migrating Hadoop workloads to the public cloud, or building large-scale data ingestion pipelines and data migrations.</p>
<p>You will also need a bachelor&#39;s degree in computer science, information systems, engineering, or equivalent practical experience; production programming experience in SQL and Python, Scala, or Java; and 2 years of professional experience with Big Data technologies and architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Big Data technologies, Data engineering, Data lake technology, Data streaming, Data ingestion and workflows, Python, Scala, Java, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8499576002</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd67fe82-1c8</externalid>
      <Title>Solutions Architect: Data &amp; AI</Title>
      <Description><![CDATA[<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>
<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>The impact you will have:</p>
<ul>
<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>
<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>
<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>
<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>
</ul>
<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value.</p>
<p>What we look for:</p>
<ul>
<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>
<li>Core strength in either data engineering or data science technologies</li>
<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>
<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>
<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>
<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data technologies, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company whose Data Intelligence Platform unifies and democratizes data, analytics, and AI for over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8346277002</Applyto>
      <Location>Pune, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f421360-4eb</externalid>
      <Title>Senior Data Scientist, Growth</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Your North Star: Build innovative, data-driven products that enhance patient engagement and retention to improve the lives of moms and babies.</p>
<p>We&#39;re looking to bring on an experienced Senior Data Scientist to own the data strategy for our Growth and Patient Experience team. In this role, you will hold a dual focus: optimizing the top-of-funnel strategy to grow patient enrollment, and driving the development of next-generation products that keep those patients engaged throughout their care journey.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Become an expert in patient behavior, engagement patterns, and retention drivers, owning the team’s data science roadmap and playing a foundational role in shaping Pomelo&#39;s patient experience strategy</li>
<li>Collaborate deeply with the Marketing team to build and optimize a multi-channel enrollment strategy. You will build attribution models, analyze funnel conversion rates, and identify the best ways to allocate resources towards patient enrollment.</li>
<li>Define and iterate on key metrics that capture the health of the business, such as enrollment and retention rates, and present insights to Pomelo’s leadership team</li>
<li>Complete exploratory analyses to inform development of enrollment and engagement-focused features, such as conversation topic categorization and patient journey segmentations</li>
<li>Build robust data pipelines and analytics infrastructure that connect external, marketing and product usage data, creating a unified view of the patient journey.</li>
<li>Lead the team’s approach to A/B testing across marketing and product initiatives to accurately measure impact</li>
<li>Collaborate closely with Product, Clinical, and Engineering teams to translate patient insights into actionable product improvements and new feature development</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4+ years professional experience in data science, product analytics, or data engineering</li>
<li>1+ years professional experience working in healthcare, consumer health, or high-touch consumer products</li>
<li>Expert proficiency in both SQL and Python, with demonstrated ability to work across the full data stack</li>
<li>Strong product sense with experience in product prototyping, A/B testing, and user behavior analysis</li>
<li>Passionate about improving maternal health outcomes and creating exceptional patient experiences</li>
<li>Proven track record of scoping and implementing data-driven product solutions to complex, ambiguous problems</li>
<li>Comfortable leading cross-functional initiatives with Product, Engineering, and Clinical teams as a strategic partner and technical expert</li>
<li>A strong communicator with a keen nose for value, someone who can easily and quickly understand the needs of different audiences</li>
<li>Experience with data visualization and dashboarding tools (e.g., Looker, Tableau, Metabase)</li>
<li>Experience working with modern data stack tools (e.g., dbt, Airflow, Snowflake)</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with AI model development including fine-tuning, prompt engineering, model evaluation and building production AI applications</li>
<li>Background in healthcare AI, clinical decision support, or medical NLP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, data science, product analytics, data engineering, healthcare, consumer health, high-touch consumer products, product prototyping, A/B testing, user behavior analysis, data visualization, dashboarding tools, modern data stack tools, AI model development, fine-tuning, prompt engineering, model evaluation, production AI applications, healthcare AI, clinical decision support, medical NLP</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Pomelo Care</Employername>
      <Employerlogo>https://logos.yubhub.co/pomelocare.com.png</Employerlogo>
      <Employerdescription>Pomelo Care is a virtual medical practice providing continuous support to women and children throughout various stages of life.</Employerdescription>
      <Employerwebsite>https://www.pomelocare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pomelocare/jobs/5970852004</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af586166-0a0</externalid>
      <Title>Technical Solutions Specialist, Data Operations</Title>
      <Description><![CDATA[<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>
<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>
<p>Responsibilities:</p>
<p>Data Opportunity Assessment and Processing</p>
<ul>
<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>
<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>
<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>
<li>Refine processing logic as requirements evolve across new data types and formats</li>
</ul>
<p>Data Delivery Infrastructure, Tooling, and Support</p>
<ul>
<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>
<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>
<li>Troubleshoot delivery issues, coordinate with partners on formatting and transfer protocols, and resolve technical escalations from partners and internal teams</li>
<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>
<li>Support occasional research evaluation tasks as needed</li>
</ul>
<p>Data Operations and Governance</p>
<ul>
<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>
<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>
<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>
</ul>
<p>You May Be a Good Fit If You Have</p>
<ul>
<li>Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>
<li>5-7+ years of experience with data pipelines or data engineering workflows</li>
<li>Background in solutions engineering, partner engineering or related role at a large tech company</li>
<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>
<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the CLI</li>
<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>
<li>Some experience interacting with external third parties delivering data</li>
</ul>
<p>Strong Candidates Will Have</p>
<ul>
<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>
<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>
<li>Experience owning or maintaining a production service or system with uptime expectations</li>
<li>Familiarity with data governance, compliance, or rights management</li>
<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>
<li>Experience leveraging AI to automate workflows</li>
</ul>
<p>Candidates Need Not Have</p>
<ul>
<li>Deep expertise in AI or machine learning</li>
<li>A pure software engineering background</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$240,000 USD</Salaryrange>
      <Skills>Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5056499008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability, from defining requirements with stakeholders through implementation until it is accepted by them.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers</li>
<li>Manage the full lifecycle of feature development from requirement definition to deployment on classified networks</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements</li>
<li>Lead customer engagements to translate mission needs into technical requirements</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li><strong>Full Stack Development:</strong> Proficiency in front-end and back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases</li>
<li><strong>Cloud-Native Technologies:</strong> Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus</li>
<li><strong>Data Engineering:</strong> Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles</li>
<li><strong>AI Application Integration:</strong> Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial</li>
<li><strong>Problem Solving:</strong> Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles</li>
<li><strong>Collaboration and Communication:</strong> Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment</li>
<li><strong>Adaptability and Learning Agility:</strong> Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle); $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado); $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>03224784-9c2</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Job Title: Senior Data Engineering Manager</p>
<p>Location: Dublin, Ireland</p>
<p>Department: R&amp;D</p>
<p>Job Description:</p>
<p>Intercom is seeking a Senior Data Engineering Manager to lead the design and evolution of the core infrastructure that powers our entire data ecosystem. As a leader, you will partner with product and business teams to drive key data initiatives and ensure the success of our data engineering team.</p>
<p>Responsibilities:</p>
<ul>
<li>Next-Gen Platform Evolution: Partner with product and business teams to design and implement the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>
<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build self-service tooling and infrastructure that enables them to move fast and deploy safely.</li>
<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment.</li>
<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>
</ul>
<p>Strategic Impact You&#39;ll Drive:</p>
<ul>
<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>
<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible.</li>
<li>Platform &amp; Warehousing Strategy: Lead the architecture and cost review and revamp of our core data infrastructure to ensure it can scale for future growth and advanced use cases.</li>
</ul>
<p>Recent Wins You&#39;ll Build Upon:</p>
<ul>
<li>An AI-assisted local analytics development environment for Airflow and dbt.</li>
<li>Data-rich AI apps containerized on Snowflake SPCS.</li>
<li>A new, modern data catalog solution.</li>
<li>Migration of critical MySQL ingestion pipelines from Aurora to PlanetScale.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>A leader, a builder, and a problem-solver who thrives on solving real-world business problems.</li>
<li>7+ years of experience in the data space, leading teams of 6+ engineers.</li>
<li>Stakeholder focus: ability to communicate complex technical solutions to a business-focused audience and vice versa.</li>
<li>Technical depth: not afraid to get hands dirty and write code when needed.</li>
<li>A leader and mentor: naturally recognizes opportunities to step back and mentor others.</li>
</ul>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<ul>
<li>Airflow at scale: extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>
<li>Modern data stack fluency: familiarity with tools like Snowflake and dbt.</li>
<li>Future-focused: keeps a keen eye on industry trends and emerging technologies.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up.</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen.</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%.</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it.</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones.</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too.</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>Policies:</p>
<ul>
<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate more easily, and create a great culture while still providing flexibility to work from home.</li>
<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Management, Data Quality, Automation, Cloud Computing, Data Warehousing, Big Data, Machine Learning, Artificial Intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer experiences for businesses. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574762</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a936a2d-e4a</externalid>
      <Title>Manager, Big Data Architecture – Professional Services (DWH, Data Engineering &amp; Migrations)</Title>
      <Description><![CDATA[<p>As a Manager of Resident Solutions Architects at Databricks, you will provide strategic leadership for delivering professional services engagements to high-value Databricks customers. You will help shape the future big data and machine learning landscape for leading Fortune 500 organisations.</p>
<p>This role includes a people-leadership component, with responsibility for core aspects of building and managing the Resident Solutions Architect team. Through your oversight and mentorship, this team will guide our largest customers, implementing pipelines spanning data engineering through model building and deployment, plus other technical tasks to help customers get value out of their data with Databricks.</p>
<p>Beyond people leadership, your responsibilities will include owning the delivery of customer projects in your region to ensure they are managed and delivered to target and exacting standards. You will be an ambassador for Services and their value in the region, will represent the organisation in steering committees, and will work with cross-functional teams and leaders to ensure Services support the development of the local business.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Achieving regional team targets for billable utilisation, hiring and revenue</li>
<li>Partnering with account executives, customer success and field engineering leaders while guiding Resident Solutions Architects to achieve success with professional services projects with customers</li>
<li>Helping resolve customer concerns on strategic accounts and professional services engagements</li>
<li>Analysing operational processes and escalation procedures and performing training needs assessments to identify opportunities for improving service delivery and contributing to customers</li>
<li>Managing a team of Resident Solution Architects and acting as a supportive manager, including handling escalations, mentoring team members, and building a career path for the assigned team members</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Proven leadership experience in managing and guiding consulting, delivery, or solution architecture teams, ensuring successful project execution and team development</li>
<li>Strong technical background as a hands-on Solutions Architect, enabling you to effectively support and mentor technical architects under your leadership while driving strategic initiatives</li>
<li>Experience driving software platform adoption in Fortune 500 organisations in markets such as finance, media, retail, telco, energy, and healthcare</li>
<li>Experience implementing project schedules and managing customer engagements</li>
<li>Experience with Databricks products, Spark ecosystem, and direct competitors</li>
<li>Travel of up to 10% is required, more at peak times</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Leadership, Strategic planning, Project management, Data engineering, Machine learning, Spark ecosystem, Databricks products, Cloud computing, DevOps, Agile methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439078002</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>77050838-92f</externalid>
      <Title>Senior Director, Data Analytics</Title>
      <Description><![CDATA[<p>As the Senior Director, Data Analytics, you&#39;ll be the strategic analytics leader for Marketing and Product. You&#39;ll lead a newly combined organisation that brings together data-driven insights across the customer lifecycle, from acquisition through adoption and expansion.</p>
<p>Reporting to the Vice President, Enterprise Data, you&#39;ll partner closely with senior leaders to improve how teams make decisions, measure performance, and drive outcomes, with a focus on shared views of usage and consumption models and major product launches.</p>
<p>You&#39;ll oversee two critical functions: Marketing Analytics (including demand generation, lifecycle marketing, brand, web, developer relations, localization, monetization, and campaign and event effectiveness) and Product Data Insights (including DevOps, Security, Platforms, AI products, new usage and consumption models, product adoption, feature usage, and customer behaviour analysis).</p>
<p>Key responsibilities include:</p>
<p>Defining and executing a unified analytics strategy across Marketing and Product, including shared metrics, measurement frameworks, and dashboards that serve as a single source of truth and connect marketing investment to product adoption, usage, and customer outcomes.</p>
<p>Building clear operating rhythms and ways of working that close historical gaps and help Marketing and Product make consistent, data-informed decisions.</p>
<p>Partnering with Marketing and Product leadership, including the Chief Product &amp; Marketing Officer and other senior stakeholders, to provide actionable insights and executive-ready recommendations that shape go-to-market plans, product launches, roadmap prioritisation, and user experience improvements.</p>
<p>Solving complex analytics problems across both functions, including attribution modelling, lead scoring optimisation, campaign and event effectiveness, product adoption and feature usage analysis, and analytics for AI-powered features (including instrumentation, usage, and cost drivers).</p>
<p>Building and maintaining forecasting and scenario modelling frameworks in partnership with Finance, Product, and go-to-market leaders, tying pipeline, recurring revenue, and usage or consumption models to planning and investment decisions.</p>
<p>Establishing and scaling an experimentation programme across Marketing and Product, setting standards for hypothesis design, test methodology, instrumentation requirements, and clear readouts that translate results into decisions.</p>
<p>Building strong partnerships with data engineering, engineering, and legal, privacy, and security teams to translate business questions into technical requirements, prioritise telemetry and data model work, and improve reliability, quality, accessibility, and compliance in the analytics stack.</p>
<p>Hiring, mentoring, and developing leaders and team members, raising the bar for strategic thinking, stakeholder partnership, and end-to-end ownership across the analytics organisation.</p>
<p>Requirements include:</p>
<p>Strategic analytics leadership across both marketing analytics and product analytics, ideally in B2B SaaS or other high-growth technology environments.</p>
<p>Experience building, leading, and developing multi-layer analytics teams (including hiring, managing managers, and coaching leaders).</p>
<p>Ability to define and operationalise end-to-end measurement frameworks across Marketing and Product, including shared KPIs and clear metric definitions.</p>
<p>Strong analytical and technical skills, including SQL; statistical analysis and experimentation (A/B and multivariate testing); forecasting and scenario modelling; and advanced analytics techniques.</p>
<p>Experience with data visualisation and BI tools (Tableau or similar), with a track record of building executive-ready reporting and narratives for senior leaders.</p>
<p>Proven partnership with Engineering and Data Engineering to translate business needs into telemetry, data models, and analytics requirements, and to improve reliability and delivery across the analytics stack.</p>
<p>Experience collaborating with Legal, Privacy, and Security partners to design compliant telemetry and data-collection approaches that respect regulations and customer expectations.</p>
<p>Ability to influence senior leaders through clear communication and actionable recommendations, and to work effectively in a fully remote, asynchronous environment while adopting GitLab&#39;s values and ways of working.</p>
<p>The base salary range for this role&#39;s listed level is currently $184,400-$314,400 USD for residents of the United States only.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$184,400-$314,400 USD</Salaryrange>
      <Skills>strategic analytics leadership, marketing analytics, product analytics, SQL, statistical analysis, experimentation, forecasting, scenario modelling, data visualisation, BI tools, Tableau, executive-ready reporting, narratives, engineering, data engineering, legal, privacy, security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8436589002</Applyto>
      <Location>Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2e0299ca-670</externalid>
      <Title>Sr. Manager, Partner Solutions Architect</Title>
      <Description><![CDATA[<p>As a Sr. Manager, Partner Solutions Architect, you will lead a team of Partner Solution Architects to help mature our consulting &amp; SI partner development function. You will provide technical guidance for practices at our consulting partners, spanning pre-sales teams and their data analytics delivery organizations. Your experience and influence will help accelerate Customer Success and Databricks DBU consumption through a global partner ecosystem.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Growing a team of Partner Solutions Architects for a company in high-growth mode</li>
<li>Ensuring partners have a comprehensive technical point of view to establish the lakehouse architecture across a wide array of their customers</li>
<li>Identifying valuable business-impacting offers and guiding the adoption of the Databricks platform into partner solutions</li>
<li>Establishing partners as trusted advisors by having strong capability and Databricks practices</li>
<li>Building a collaborative culture within a hyper-growth team to embody and ensure Databricks&#39; company values</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>8+ years experience in senior pre-sales or post-sales roles, where you have assisted in the development of a successful team of professionals in Big Data, Cloud, or SaaS</li>
<li>Player/Coach capability, comfortable both leading and contributing to project work with partners on our joint customers</li>
<li>Knowledgeable in and passionate about data-driven decisions, AI, and Cloud software models</li>
<li>Great at instituting processes for technical field members to promote efficiency</li>
<li>Bachelor&#39;s degree in a STEM subject; MSc desirable</li>
<li>Technical background either in Data Engineering / Database technologies or Data Science</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>pre-sales, post-sales, Big Data, Cloud, SaaS, Data Engineering, Database technologies, Data Science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8073294002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53024247-9d6</externalid>
      <Title>Senior Solutions Architect - Lakewatch</Title>
      <Description><![CDATA[<p>We are seeking a Senior Solutions Architect to join our Lakewatch team in London. As a Senior Solutions Architect, you will provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment.</p>
<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory, driving Lakewatch adoption from initial data offload through full SIEM augmentation or replacement.</p>
<p>As a trusted advisor, serve as an expert Solutions Architect building technical credibility with CISOs, security architects, SOC leadership, and security analysts to drive product adoption and vision.</p>
<p>Enable clients at scale through workshops, POC execution, and developing customer-facing collateral that increases technical knowledge and demonstrates the value of an open agentic SIEM architecture.</p>
<p>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</p>
<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical security environments.</p>
<p>Establish and refine the sales qualification and POC intake process, ensuring well-scoped engagements that maximize customer success and minimize friction for R&amp;D.</p>
<p>The ideal candidate will have 5+ years of experience in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level security strategy and product adoption.</p>
<p>Experience with design and implementation of data and AI applications in cybersecurity, including anomaly detection, behavioral analytics, and agentic AI workflows for triage and investigation.</p>
<p>Proficiency in programming, debugging, and problem-solving using SQL, Python, and AI tools.</p>
<p>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes in cybersecurity.</p>
<p>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP), with an understanding of cloud-native security logging and monitoring.</p>
<p>Deep experience in security operations, with broad familiarity across one or more of the following: data engineering, data warehousing, AI/ML for security, data governance, and streaming.</p>
<p>Undergraduate degree (or higher) in a technical field such as Computer Science, Cybersecurity, Applied Mathematics, Engineering or similar.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cybersecurity engineering, security operations, security architecture, design and implementation of data and AI applications, anomaly detection, behavioral analytics, agentic AI workflows, SQL, Python, AI tools, cloud-native security logging and monitoring, data engineering, data warehousing, AI/ML for security, data governance, streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that unifies and democratizes data, analytics, and AI for over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8493140002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62bc6bad-261</externalid>
      <Title>Lead Solutions Architect (Pre-sales) – Public Sector</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re seeking a Lead Solutions Architect to join our Japan Public Sector team. As a senior technical leader, you will be responsible for shaping and executing the data and AI strategy for key public sector customers in Japan.</p>
<p>Your impact will be significant, as you will own the technical strategy for these customers, defining the target architecture and roadmap for their Databricks-based data and AI platform. You will also lead solution design for priority government use cases, such as citizen 360 and eligibility, program analytics, smart infrastructure, cybersecurity, and fraud/waste/abuse.</p>
<p>In addition, you will design for security and compliance, ensuring architectures meet public sector expectations on data residency, access control, governance, and auditability. You will turn pilots into platforms, linking Databricks to visible mission outcomes and using those wins to drive expansion across bureaus, departments, and levels of government.</p>
<p>As a mentor, you will coach Solutions Architects and partners on how to work effectively with Japan public sector stakeholders and patterns on the Databricks platform.</p>
<p>We&#39;re looking for someone with 10+ years of experience in data platforms, analytics, or cloud architecture, including significant experience with public sector organizations in Japan. You should have a proven ability to build trust with senior government stakeholders and lead high-stakes technical discussions in Japanese.</p>
<p>You should also have a strong technical background in Data Engineering, Data Warehousing/Analytics, AI/ML, or Cloud Architecture, with a focus on solution design and platforms. Experience architecting data and analytics solutions for public sector use cases is essential.</p>
<p>If you&#39;re excited about helping Japan&#39;s Public Sector use data and AI to deliver better outcomes – while setting the technical bar for our GTM in Japan – we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Engineering, Data Warehousing/Analytics, AI/ML, Cloud Architecture, Solution Design, Platforms, Public Sector Experience, Japanese Language, Business-Level English</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data and analytics.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437076002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>879783fa-e08</externalid>
      <Title>Sr. Product Manager, Data Engineering</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Data Engineering is foundational and among the largest scale workloads on the Databricks Data Intelligence Platform. We are reinventing Data Engineering with Lakeflow - a unified product and experience for simple data ingestion, declarative data transformation, and real-time streaming.</p>
<p>In this role, you will lead product management for a core Lakeflow product area. You will own and drive all aspects of product management including vision, strategy, roadmap, execution, and go-to-market. In addition, you will partner closely with various Databricks product teams to enable Data Engineering for the overall Databricks product portfolio including data science, data warehousing, business intelligence, and machine learning products.</p>
<p>The impact you will have includes leading product management for one of the fastest growing products and businesses at Databricks; making company-wide impact by driving Data Engineering across the Databricks product portfolio; developing and deepening expertise in Data Engineering; defining, shaping, and driving the future of data processing, data applications, and data pipelines; and owning the full life cycle of product development from ideation to requirements, development, pricing, launch, and go-to-market.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$115,400-$204,200 USD</Salaryrange>
      <Skills>product management, data engineering, Lakeflow, data ingestion, declarative data transformation, real-time streaming, product vision, product strategy, roadmap development, execution, go-to-market</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6322654002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f962d3f-14e</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd party migrations and end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461218002</Applyto>
      <Location>Dallas, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ceb4835-0f1</externalid>
      <Title>Manager, Professional Services</Title>
      <Description><![CDATA[<p>As a Manager, Professional Services, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>
<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>
<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>
<li>6+ years of experience working on Big Data Architectures independently.</li>
<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>
<li>Experience working on Databricks platform is a plus.</li>
<li>Documentation and white-boarding skills.</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Willingness to travel for onsite customer engagements within India.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team Leadership, Databricks, GCP, AWS, Azure, Documentation, White-boarding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8503068002</Applyto>
      <Location>Remote - India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0036f074-845</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing using Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456966002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d1eb4b56-04e</externalid>
      <Title>Tech-Operations</Title>
      <Description><![CDATA[<p>We are seeking a Tech-Operations professional to join our team. As a Tech-Operations professional, you will collaborate across the company to calculate Stripe fees for customers, introduce data and process efficiencies, and monitor relevant metrics to drive process improvements.</p>
<p>You will work closely with engineering, product, accounting, finance, sales, and operations teams to develop sophisticated financial models to price complex deals, forecast revenue and profitability, and evaluate different business scenarios.</p>
<p>You will also be responsible for introducing and managing reconciliation processes between disparate datasets, triaging misbilling requests, auditing enterprise contracts, and automating manual billing processes.</p>
<p>You will leverage data analysis tools and techniques, including SQL, to extract and analyze data, and use data visualization dashboards to reconcile billing data with negotiated contract terms.</p>
<p>You will work independently and collaboratively with technical and non-technical teams across stakeholders to identify and triage technical issues, drive solutions with clear metrics to show impact, and establish priorities and reliably execute against time-sensitive deadlines.</p>
<p>You will have the opportunity to put the global economy within everyone&#39;s reach while doing the most important work of your career.</p>
<p>Minimum requirements:</p>
<ul>
<li>Bachelor&#39;s degree or foreign equivalent in Computer Science, Computer Engineering, Mathematics, Information Systems, Statistics, or a related field</li>
<li>4 years of experience in Technical Project, Product Management, or Data Analysis or Engineering</li>
<li>4 years of experience in building technical solutions using SQL</li>
<li>4 years of experience in data engineering</li>
<li>4 years of experience using analytic software (e.g. Excel, Gsheets, Tableau) to diagnose and solve problems</li>
<li>4 years of experience in learning and internalizing complex concepts and subjects quickly, and articulating them to others clearly</li>
<li>4 years of experience in navigating the nuanced complexity of financial systems and solving large-scale, technical challenges</li>
<li>4 years of experience in investigating, prioritizing, and identifying the root cause of non-trivial issues</li>
<li>4 years of experience in managing communications and workflows with technical and non-technical teams across stakeholders</li>
<li>4 years of experience in operating independently and working in a fast-paced, results-focused environment, establishing priorities and reliably executing against time-sensitive deadlines</li>
<li>4 years of experience in identifying and triaging technical issues and driving solutions with clear metrics to show impact</li>
</ul>
<p>Pay and benefits:</p>
<ul>
<li>Salary: $124,600 – $186,800/yr.</li>
<li>40 hours/week</li>
<li>Up to 50% telework permitted</li>
<li>Multiple positions available</li>
<li>Additional benefits for this role may include: equity, company bonus or sales commissions/bonuses; 401(k) plan; medical, dental, and vision benefits; and wellness stipends</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$124,600 – $186,800/yr.</Salaryrange>
      <Skills>SQL, data engineering, analytic software, financial systems, problem-solving, communication, project management, time management</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe, LLC.</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses. It provides payment and revenue growth solutions to millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7812235</Applyto>
      <Location>Chicago, IL</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>84875ccd-0a5</externalid>
      <Title>Senior Partner Marketing Manager</Title>
      <Description><![CDATA[<p>As a Senior Partner Marketing Manager, you&#39;ll play a critical role in shaping and executing co-marketing initiatives with some of our most important technology partners globally.</p>
<p>Your work will amplify the reach and impact of dbt across the modern data stack, helping to elevate our brand and drive growth through the ecosystem.</p>
<p>This is a highly cross-functional role where strategic thinking, creativity, and strong collaboration skills will be key to success.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead the creation and execution of marketing campaigns, programs, events, and activities with strategic technology partners.</li>
<li>Collaborate closely with the Revenue Marketing, Partnerships, and Product Marketing teams to ensure GTM partner plans align with dbt Labs&#39; broader business goals.</li>
<li>Build and nurture relationships with marketing counterparts at key partners like Snowflake, Google, AWS, Microsoft, and Databricks to align on co-marketing efforts and shared objectives.</li>
<li>Clearly articulate the value of dbt to partners and support them in promoting the platform internally and to their customer base.</li>
<li>Own the development of joint messaging and co-branded assets, including blogs, webinars, solution briefs, and presentation decks, ensuring alignment and consistency across all public-facing content.</li>
<li>Create internal enablement materials to educate and empower sales teams to leverage partner campaigns and initiatives.</li>
<li>Gain a deep understanding of partner business strategies and priorities; design co-marketing programs that provide mutual value.</li>
<li>Set and manage OKRs, track program performance, and deliver quarterly reviews with partners to assess impact, identify opportunities, and ensure strategic alignment.</li>
<li>Develop annual GTM marketing plans tailored to individual partners, accounting for geographic and vertical-specific nuances.</li>
<li>Exercise strategic judgment in deciding which partner activities to pursue and how best to allocate time and resources for maximum impact.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of experience in B2B marketing or similar, particularly within the data or software industry, with a strong track record of building and executing successful partner marketing programs.</li>
<li>A &#39;builder&#39; mindset: you enjoy solving problems, creating structure where none exists, and working cross-functionally to drive measurable outcomes.</li>
<li>Deep familiarity with the modern data ecosystem and players such as Snowflake, Google, AWS, Microsoft, and Databricks.</li>
<li>The ability to navigate complex partner organizations and manage relationships with multiple stakeholders across competing interests.</li>
<li>Strong storytelling and positioning skills: you know how to distill joint value propositions into compelling messaging and content.</li>
<li>Comfort operating in a fast-paced, dynamic environment with high levels of ambiguity.</li>
<li>A broad understanding of integrated marketing strategies, including digital campaigns, field marketing, and industry events.</li>
<li>Exceptional communication skills, including concise writing and confident presentation abilities, especially with senior stakeholders.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience working asynchronously within a remote, distributed team.</li>
<li>Prior experience working for or closely with any of dbt Labs&#39; strategic partners.</li>
<li>Familiarity with the role dbt Labs plays in the cloud data warehouse ecosystem and the modern data stack.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p><strong>Compensation</strong></p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Austin, Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $132,000-$188,700</li>
<li>The typical starting salary range for this role in the select locations listed is: $147,000-$209,000</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$132,000-$188,700</Salaryrange>
      <Skills>B2B marketing, Partner marketing, Digital campaigns, Field marketing, Industry events, Cloud data warehouse ecosystem, Modern data stack, Data engineering, Analytics engineering, Snowflake, Google, AWS, Microsoft, Databricks</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673163005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21b40571-b50</externalid>
      <Title>Account Executive, Commercial</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. Since 2016, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, we’ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>As an Account Executive, you will be responsible for managing our commercial customer base. The ideal person will be a proactive and curious member of our growing Sales team, identifying new business with prospects and growth opportunities for clients. Foresight and experience working with intricate sales cycles will take this individual confidently into the future of dbt Labs.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Build, manage and close your own pipeline of companies that you believe will benefit from the dbt Cloud offering</li>
<li>Manage and deepen the dbt Cloud footprint in existing accounts, optimizing our impact on these companies</li>
<li>Engage with technology partners and ecosystem service providers to optimize our impact and reach in the region</li>
<li>Lead and contribute to team projects that develop our sales process</li>
<li>Work with product to build and maintain the dbt Cloud enterprise roadmap</li>
</ul>
<p>We’re looking for someone who has:</p>
<ul>
<li>2+ years closing experience in technology sales, with a proven track record of exceeding annual targets</li>
<li>Ability to understand complex technical concepts and develop them into a consultative sale</li>
<li>Excellent verbal, written, and in-person communication skills to engage stakeholders at all levels of an analytics organization (individual developer up to CTO)</li>
<li>The diligence and organizational skills to work long, intricate sales cycles involving multiple client teams</li>
<li>Ability to operate in an ambiguous and fast-paced work environment</li>
<li>A passion for being an inclusive teammate and involved member of the community</li>
<li>Experience with SQL or willingness to learn</li>
</ul>
<p>You have an edge if you have:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, and/or open-sourced software</li>
<li>Knowledge of or prior experience with dbt</li>
<li>Prior experience selling into the Nordics</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>The typical starting salary range for this role is €124,000 to €150,000, with growth into the €170,000s</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>€124,000 to €150,000, with growth into the €170,000s</Salaryrange>
      <Skills>technology sales, complex technical concepts, consultative sale, SQL, ETL, BI, open-sourced software, analytics, data engineering, sales cycle management</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4663371005</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>94deaf2c-6d8</externalid>
      <Title>Manager, Field Engineering - Strategic Accounts Germany</Title>
      <Description><![CDATA[<p>Job Title: Manager, Field Engineering - Strategic Accounts Germany</p>
<p>We are seeking a seasoned professional to lead our Field Engineering team in Germany, focusing on strategic manufacturing and energy customers. As a player-coach, you will balance hands-on involvement in critical customer engagements with building repeatable motions, artefacts, and coaching mechanisms that scale impact across the team.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build, lead, and scale the pre-sales (Solutions Architect) team for Strategic Manufacturing and Energy customers in Germany, including hiring, coaching, performance management, and career development, and owning coverage for complex industrial stakeholder environments.</li>
<li>Own pre-sales ROI and effectiveness, including coverage models, prioritisation, and impact on win-rate, sales cycles, and customer adoption.</li>
<li>Act as a player-coach in strategic opportunities by balancing hands-on involvement in the most critical customer engagements with building repeatable motions, artefacts, and coaching mechanisms that scale impact across the team.</li>
<li>Establish and reinforce a solution- and value-based selling culture, focused on business outcomes, not just technical excellence.</li>
<li>Partner closely with Sales leadership to support forecast accuracy, pipeline quality, and consumption growth in a usage-driven business model.</li>
<li>Engage with C-level and senior executives at strategic customers to build trusted relationships and long-term partnerships.</li>
<li>Drive a scalable adoption model by anchoring partner and PS attach and establishing a crisp pre-/post-sales handshake tied to consumption outcomes (onboarding readiness, adoption milestones, expansion path).</li>
<li>Represent Field Engineering as part of the regional leadership team, contributing to Databricks&#39; brand, market positioning, and growth in Germany.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Proven experience building and managing pre-sales or technical field teams in a high-growth environment.</li>
<li>Strong background in data engineering, analytics platforms, databases, or data science, gained through engineering, consulting, or technical leadership roles.</li>
<li>Demonstrated success in consumption-based or SaaS business models, including forecasting, expansion, and customer adoption.</li>
<li>Ability to operate as a player-coach, balancing hands-on deal involvement with scalable team leadership.</li>
<li>Track record of partnering effectively with Sales and cross-functional leaders to drive measurable business outcomes.</li>
<li>Experience establishing repeatable processes that improve efficiency without limiting innovation or ownership.</li>
<li>Passion for the data and AI market and the ability to articulate a clear, executive-level POV on the value of Databricks.</li>
<li>Executive presence with the ability to influence and communicate confidently at C-level.</li>
<li>Fluent German (highly preferred) and fluent English (required).</li>
</ul>
<p>About Databricks</p>
<p>Databricks is a data and AI company providing a unified and democratized data, analytics, and AI platform. More than 10,000 organisations worldwide rely on its services.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, analytics platforms, databases, data science, pre-sales, technical field teams, consumption-based business models, forecasting, expansion, customer adoption, German, English</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company providing a unified and democratized data, analytics, and AI platform. It has over 10,000 organisations worldwide relying on its services.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8446027002</Applyto>
      <Location>Berlin, Germany; Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a7cad02-cd5</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
<Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements, tackling their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to deliver projects to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing using Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494155002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25abb310-047</externalid>
      <Title>Manager, Field Engineering</Title>
      <Description><![CDATA[<p>As a Manager, Field Engineering, you will lead a diverse team of technical pre-sales Solutions Architects covering the Asean region.</p>
<p>You’ll play a key leadership role at the intersection of technology and business, guiding your team to design impactful solutions and drive growth through collaboration and innovation.</p>
<p>You’ll empower your team to communicate complex ideas with clarity, support strategic enterprise customer engagements, and build lasting partnerships with clients and internal stakeholders.</p>
<p>The impact you will have:</p>
<ul>
<li>Lead, mentor, and support a high-performing, inclusive pre-sales team</li>
<li>Foster an environment of belonging, continuous learning, and psychological safety that reflects Databricks’ values of customer focus, teamwork, and diversity</li>
<li>Work closely with Sales and other teams to identify opportunities and deliver data-driven, value-focused solutions</li>
<li>Establish best practices that improve team efficiency, collaboration, and business impact</li>
<li>Build trusted relationships with customers and partners, acting as a strategic advisor in their digital transformation journey</li>
<li>Partner with Marketing, Sales, Services, and other cross-functional teams to ensure a seamless customer experience</li>
<li>Represent Databricks as part of the regional leadership team, contributing to our local presence and inclusive culture</li>
</ul>
<p>What we’re looking for:</p>
<p>You do not need to fulfill every single requirement to be a strong candidate. If you are excited about this role and have related experience, we encourage you to apply.</p>
<ul>
<li>Proven experience leading or managing technical teams in Big Data, Cloud, or SaaS environments, or equivalent experience gained through hands-on engineering or consulting work</li>
<li>A track record of coaching and developing diverse, high-performing teams</li>
<li>Strong collaboration skills and the ability to partner effectively with Sales, Marketing, and other cross-functional leaders</li>
<li>Excellent communication and interpersonal skills</li>
<li>Technical background in Data Engineering, Databases, or Data Science (consulting or customer-facing experience preferred)</li>
<li>Enthusiasm for data, AI, and cloud technologies, and the ability to connect these to real business outcomes</li>
</ul>
<p>About Databricks:</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance:</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer’s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data, Cloud, SaaS, Data Engineering, Databases, Data Science, Technical leadership, Team management, Collaboration, Communication, Data, AI, and cloud technologies, Business outcomes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8438767002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bbbd3f3a-5fe</externalid>
      <Title>Solutions Architect (Pre-sales) - Digital Native</Title>
      <Description><![CDATA[<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the Digital Native field.</p>
<p>You will help our customers achieve tangible data-driven outcomes through the use of the Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem. You&#39;ll grow as a leader in your field while finding solutions to our customers&#39; biggest challenges in big data, analytics, data engineering, and data science.</p>
<p>Responsibilities:</p>
<ul>
<li>Be a Big Data Analytics expert on aspects of architecture and design</li>
<li>Lead your prospects through evaluating and adopting Databricks</li>
<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>
<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>
<li>Experience in a customer-facing pre-sales or consulting role, with a core strength in either Data Engineering or Data Science, is advantageous</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience designing and implementing architectures within public clouds (AWS, Azure or GCP)</li>
<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others.</li>
<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>
<li>Experience working with Enterprise Accounts</li>
<li>Written and verbal fluency in Japanese and English</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Public Cloud, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It has over 10,000 organisations worldwide as customers.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437026002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5f66d426-bea</externalid>
      <Title>Principal Software Engineer, Corporate AI</Title>
      <Description><![CDATA[<p>The Principal Software Engineer is a highly skilled expert responsible for shaping and executing the organization&#39;s intelligence vision. This role integrates expertise in Artificial Intelligence (AI), Machine Learning (ML), Automation, Data Analytics and Visualization to deliver transformative customer, partner, and colleague experiences that drive revenue growth and enhance productivity.</p>
<p>The position defines the technical direction for intelligence initiatives, leading the design, development, and deployment of robust, scalable, and secure AI solutions while fostering innovation through emerging technologies.</p>
<p>A critical aspect of the role is providing partnership, mentorship and technical guidance, cultivating a culture of excellence and continuous learning. Through close cross-functional collaboration across teams and stakeholders, the role ensures technical efforts are strategically aligned and deliver measurable impact.</p>
<p>Additionally, the position plays a central role in strategic problem-solving, addressing complex challenges in intelligence systems and data pipelines, and making informed architectural decisions that ensure long-term scalability and success.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Define, drive, and communicate the technical vision for intelligence, AI, and data initiatives, ensuring alignment with CIT strategy, EPD goals, and broader organisational objectives.</li>
<li>Take a holistic view of CIT systems and architecture to ensure they are scalable, reliable, secure, and maintainable over multiple years.</li>
<li>Lead the design, development, and deployment of high-performance AI systems, data pipelines, and intelligent services from conception through production.</li>
<li>Make strategic architectural decisions to address complex AI, data, and platform challenges, balancing short-term delivery with long-term resilience and scalability.</li>
<li>Identify opportunities to simplify systems, reduce operational and security risk, and improve developer productivity.</li>
<li>Contribute directly to prototyping, proof of concepts, and implementation of technical components when needed to validate strategy, de-risk decisions, or accelerate progress.</li>
<li>Architect, evolve, and scale AI, automation, and intelligence platforms that enable advanced analytics, personalisation, search, and intelligent decision-making.</li>
<li>Drive innovation in intelligence models, distributed training, optimisation techniques, and data engineering to maximise performance, quality, and business impact.</li>
<li>Enhance search and discovery capabilities using intelligent algorithms, natural language processing, and modern data systems.</li>
<li>Evaluate, select, and integrate emerging technologies in AI, ML, and automation to maintain a competitive and forward-looking technical posture.</li>
<li>Partner across engineering, product, design, infrastructure, and other stakeholders to ensure intelligence initiatives directly support strategic objectives.</li>
<li>Translate technical capabilities and advancements into clear business outcomes that improve productivity, efficiency, and growth.</li>
<li>Resolve conflicting requirements and priorities with sound technical judgment that favours long-term organisational outcomes over local optimisation.</li>
<li>Advocate for intelligence-driven solutions across the organisation and influence company-wide technical priorities.</li>
<li>Act as a trusted technical advisor to senior engineering leadership, with IC6 scope extending to org-wide and EPD-level strategy.</li>
<li>Provide mentorship and technical guidance to engineers and data scientists from mid-level through senior, fostering continuous learning and technical excellence.</li>
<li>Serve as a technical multiplier by raising the effectiveness of surrounding teams through design reviews, code reviews, architectural guidance, and pragmatic execution.</li>
<li>Facilitate knowledge sharing across teams through documentation, design write-ups, technical discussions, and mentorship programs.</li>
<li>Act as a voice for engineers by synthesising feedback, surfacing gaps and risks, and communicating them clearly to leadership.</li>
<li>Contribute to multi-year technical vision and roadmap planning, anticipating future scale, complexity, and organisational needs.</li>
<li>Identify architectural, operational, and security risks early and mobilise proactive mitigation plans across org boundaries.</li>
<li>Partner closely with managers, product leaders, and senior engineers to ensure ambitious initiatives remain feasible, sustainable, and well-aligned.</li>
<li>For IC6 scope, influence technical direction beyond CIT and partner directly with senior EPD leadership on company-wide strategy.</li>
<li>Lead and support critical, high-impact initiatives by defining technical direction, clarifying requirements, gathering estimates, and ensuring delivery against milestones.</li>
<li>Drive execution on complex projects with significant ambiguity or high cost of failure.</li>
<li>Improve engineering effectiveness by championing best practices such as CI, automated testing, reliability reviews, and clear ownership models.</li>
<li>Promote a bias toward action, thoughtful experimentation, and continuous learning.</li>
<li>Model excellence in engineering craft, collaboration, accountability, and inclusive behaviour.</li>
<li>Lead by example in living Dropbox values, including integrity, ownership, simplicity, and inclusivity.</li>
<li>Support hiring by interviewing, calibrating candidates against a high technical bar, and representing Dropbox authentically to candidates and partners.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>12+ years of professional experience in software engineering, with depth in areas such as intelligent workflows, enterprise-scale AI adoption, automation, or data engineering.</li>
<li>Proven track record of leading large-scale, multi-team technical initiatives from conception to production, including solving ambiguous problems, setting technical vision, and driving impact without direct authority.</li>
<li>Strong architectural judgment and systems thinking, with the ability to balance short-term delivery with long-term sustainability, scalability, and operational excellence.</li>
<li>Demonstrated ability to influence across teams and disciplines through technical leadership, collaboration, and sound decision-making rather than formal authority.</li>
<li>Experience mentoring engineers and raising the technical bar of an organisation through design reviews, code reviews, and technical guidance.</li>
<li>Exceptional written and verbal communication skills, with the ability to clearly explain complex technical concepts, translate technical strategy to diverse audiences, and influence stakeholders at all levels.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Strong coding ability in at least one language commonly used in AI and data systems such as Python, Java, Go, or Scala, with hands-on experience building models, data pipelines, or scalable production services.</li>
<li>Experience operating in platform, infrastructure, or internal tooling organisations, including leading or significantly influencing org-wide technical initiatives.</li>
<li>Proven ability to navigate ambiguity and competing priorities, drive clarity, and make sound technical and product trade-offs in partnership with product managers.</li>
<li>Experience collaborating cross-functionally with product, design, infrastructure, and legal or privacy stakeholders to deliver AI-powered or data-intensive products responsibly.</li>
<li>Familiarity with AI-assisted development practices in large codebases, along with experience representing engineering externally through talks, blogs, or industry events when applicable.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Artificial Intelligence, Machine Learning, Automation, Data Analytics, Visualization, Python, Java, Go, Scala, Cloud Storage, File-Sharing, Software Engineering, Intelligent Workflows, Enterprise-Scale AI Adoption, Data Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a cloud storage and file-sharing service provider.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7537004</Applyto>
      <Location>Remote - Canada: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3beddc8f-183</externalid>
      <Title>Staff Data Systems Analyst</Title>
      <Description><![CDATA[<p>At ZoomInfo, we&#39;re looking for a Senior Data Systems Analyst to join our team. As a key member of our data operations team, you&#39;ll be responsible for building deep expertise in our company data pipeline, which ingests, processes, and profiles millions of company records. Your primary focus will be on mastering our pipeline architecture, contributing to our infrastructure transition, and leading strategic data improvement initiatives.</p>
<p>In your first 6-12 months, you&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth. As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Mastering our company data pipeline architecture, including how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Reading and analyzing production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Developing frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Creating clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
<li>Contributing to pipeline evolution and infrastructure improvements by participating in design conversations with Engineering and Product, validating pipeline improvements through rigorous testing, and translating data quality investigations and emerging requirements into system-level improvement opportunities</li>
<li>Solving complex, ambiguous data challenges by leading or contributing to data improvement initiatives that require both systems thinking and creative problem-solving</li>
<li>Building partnerships and institutional knowledge by developing strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts, conducting impact analyses and validation studies, and documenting your learning, approaches, and insights</li>
</ul>
<p>We&#39;re looking for a highly skilled individual with a strong background in data analytics, data engineering, or related technical roles. You should have experience working with data pipelines, ETL systems, or data processing infrastructure, and be able to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility.</p>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar)</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with ability to investigate complex issues systematically</li>
<li>Excellent communication skills, able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with a strong ownership mentality; you drive your work forward and know when to seek input</li>
</ul>
<p>Preferred qualifications include experience with company data, business data, web data acquisition, or data quality initiatives, as well as experience with data profiling, entity resolution, record linkage, or data matching systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data analytics, data engineering, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system logic, technical feasibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides software solutions for sales and marketing professionals.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408622002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a03720f6-bc3</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>As a Solutions Architect at Databricks, you will partner with our customers to design scalable data architectures using Databricks technology and services.</p>
<p>You have technical depth and business knowledge and can drive complex technology discussions which express the value of the Databricks platform throughout the sales lifecycle.</p>
<p>In partnership with our Account Executives, you will engage with our customers&#39; technical leads, including architects, engineers, and operations teams with the goal of establishing yourself as a trusted advisor to achieve tangible outcomes.</p>
<p>You will work with teams across Databricks and our executive leadership to represent your customer&#39;s needs and build valuable customer engagements and report to the Field Engineering Manager.</p>
<p>The impact you will have:</p>
<ul>
<li>Work with Sales and other essential partners to develop account strategies for your assigned accounts to grow their usage of the platform.</li>
<li>Establish the Databricks Lakehouse architecture as the standard data architecture for customers through excellent technical account planning.</li>
<li>Build and present reference architectures and demo applications for prospects to help them understand how Databricks can be used to achieve their goals and to land new users and use cases.</li>
<li>Capture the technical win by consulting on big data architectures, data engineering pipelines, and data science/machine learning projects; prove out the Databricks technology for strategic customer projects; and validate integrations with cloud services and other 3rd party applications.</li>
<li>Become an expert in, and promote, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a customer-facing pre-sales, technical architecture, or consulting role with expertise in at least one of the following technologies:
<ul>
<li>Big data engineering (Ex: Spark, Hadoop, Kafka)</li>
<li>Data Warehousing &amp; ETL (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning (Ex: pandas, scikit-learn, HPO)</li>
<li>Data Applications (Ex: Logs Analysis, Threat Detection, Real-time Systems Monitoring, Risk Analysis and more)</li>
</ul>
</li>
<li>Experience translating a customer&#39;s business needs to technology solutions, including establishing buy-in with essential customer stakeholders at all levels of the business.</li>
<li>Experienced at designing, architecting, and presenting data systems for customers and managing the delivery of production solutions of those data architectures.</li>
<li>Fluent in SQL and database technology.</li>
<li>Debug and development experience in at least one of the following languages: Python, Scala, Java, or R.</li>
<li>Desired: Built solutions with public cloud providers such as AWS, Azure, or GCP</li>
<li>Desired: Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</li>
<li>Travel to customers in your region up to 30% of the time.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,500-$224,000 CAD</Salaryrange>
      <Skills>Big data engineering, Data Warehousing &amp; ETL, Data Science and Machine Learning, Data Applications, SQL and database technology, Python, Scala, Java, or R, Built solutions with public cloud providers such as AWS, Azure, or GCP, Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>164500</Compensationmin>
      <Compensationmax>224000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5898477002</Applyto>
      <Location>Toronto, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc79e6e5-5c0</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects, which require integrating with client systems, training, and other technical tasks, to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, Data engineering, Data science, Cloud technology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180656</Compensationmin>
      <Compensationmax>248360</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494156002</Applyto>
      <Location>Seattle, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19182c1d-b27</externalid>
      <Title>Solutions Architect - UAE</Title>
      <Description><![CDATA[<p>At Databricks, our core values are at the heart of everything we do; a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>
<p>We aim to inspire our customers to make informed decisions that push their business forward. We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>
<p>As a Solutions Architect in the UAE Pre-Sales team, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customised solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>Join us in our quest to change how people work with data and make a better world!</p>
<p>The impact you will have:</p>
<ul>
<li>Create impactful and successful relationships with customer accounts in the United Arab Emirates, providing technical and business value to Databricks customers in collaboration with the extended team.</li>
<li>Become the trusted advisor of your customer on the Data and AI landscape by successfully driving and delivering the adoption of the Databricks Data Intelligence Platform.</li>
<li>Enable partners and support internal events in the MEA region.</li>
<li>Scale best practices in your field by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Experienced in customer interactions in a technical pre-sales capacity and adept in managing complex sales lifecycles.</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring an ability to switch context and/or levels of technical depth.</li>
<li>Ability to provide technical solutions for specialised customer needs, navigate a competitive landscape, and effectively develop relationships to achieve long-term customer success.</li>
<li>Hands-on expertise with complex Big Data architecture design for public cloud platform(s) solutions, focusing on use cases in Data Warehousing and Data Engineering architecture and implementation. Data Science and Machine Learning skills will be advantageous.</li>
<li>Prior experience with coding in a core programming language (e.g., Python, SQL) and willingness to learn Apache Spark™.</li>
<li>Experience and skills on the Databricks platform will be highly advantageous for the role.</li>
<li>Excellent communication skills in English required as a minimum. Fluency in Arabic will be highly preferable for the position.</li>
</ul>
<p>Key Notes:</p>
<ul>
<li>Location for the role will be in Paris (i.e. within a commutable distance for a hybrid schedule).</li>
<li>You will need to be flexible and willing to travel to the United Arab Emirates for customer visits on a regular basis (up to ~2 weeks per month).</li>
<li>We are seeking a candidate who is interested in a future relocation to the region (Dubai) when an office is opened.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>customer interactions, technical pre-sales capacity, complex sales lifecycles, use case discovery, solution architecture designs, Big Data architecture design, public cloud platform(s), Data Warehousing, Data Engineering, Apache Spark, Python, SQL, Data Science, Machine Learning, Arabic</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8287419002</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>48e2e160-bde</externalid>
      <Title>Senior Solutions Architect - Weights &amp; Biases</Title>
      <Description><![CDATA[<p>Our Solutions Architecture team at Weights &amp; Biases is a unique hybrid organization, combining the deep technical skills of Site Reliability Engineering with the consultative expertise of Solutions Architecture. We focus on ensuring customers can successfully deploy and operate W&amp;B across cloud and on-prem environments while delivering a best-in-class experience that accelerates ML adoption at scale.</p>
<p>As a Solutions Architect, you will be responsible for managing complex customer deployments across AWS, GCP, Azure, and on-prem environments. You’ll partner directly with customer engineering teams to provision and monitor services, debug and resolve infrastructure issues, and ensure performance and scalability using SRE best practices. This role blends hands-on technical problem-solving with customer-facing engagement, including technical discussions, demos, workshops, and enablement content creation. You’ll work closely with Sales Engineering, Field Engineering, Support, and Product to drive adoption and influence our product roadmap based on customer feedback.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love diving into infrastructure problems and solving them systematically</li>
<li>You’re curious about how to scale complex ML systems in production environments</li>
<li>You’re an expert in building and running containerized, distributed systems</li>
</ul>
<p>We work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $180,000 to $200,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 to $200,000</Salaryrange>
      <Skills>Docker, Kubernetes, Helm charts, Networking, Cloud-managed services (e.g., MySQL, Object Stores), Infrastructure as Code (IaC), preferably Terraform, Linux/Unix command line experience, Python, ML workflows or tools, Deep proficiency in Kubernetes design patterns, including Operators, Familiarity with data engineering and MLOps tooling, Experience as an educator or facilitator for technical training sessions, workshops, or demos, SaaS, web service, or distributed systems operations experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4622845006</Applyto>
      <Location>Livingston, NJ / New York, NY / San Francisco, CA / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8ec75b5e-8fc</externalid>
      <Title>Solutions Architect (Adelaide)</Title>
      <Description><![CDATA[<p>You will be an essential part of our mission to create a culture of proactiveness and customer-centric mindset, guiding us to create a unified platform that makes data science and analytics accessible to everyone. You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>The impact you will have:</p>
<ul>
<li>Form successful relationships with clients throughout your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>
<li>Operate as an expert in big data analytics to excite customers about Databricks, developing into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>
<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>
<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>
<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Spark.</li>
<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring an ability to switch context across levels of technical depth.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Big Data Analytics, Spark, Cloud Computing, Data Science, Machine Learning, Data Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide rely on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8472735002</Applyto>
      <Location>Remote - Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a57339aa-939</externalid>
      <Title>Staff Data Engineer, tvScientific</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization.</p>
<p>You will work at the intersection of data engineering, identity resolution, privacy, and platform reliability. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and maintain a scalable identity resolution platform</li>
<li>Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources</li>
<li>Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable</li>
<li>Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets</li>
<li>Build and operate batch and streaming pipelines using modern data stack tools</li>
<li>Create clear documentation, standards, and runbooks for identity and governance systems</li>
<li>Own data governance foundations including data lineage, quality checks, schema enforcement, and access controls</li>
<li>Implement privacy-by-design principles (PII handling, consent enforcement, retention policies)</li>
<li>Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)</li>
<li>Establish monitoring and alerting for data quality, freshness, and integrity</li>
</ul>
<p>What we&#39;re looking for:</p>
<ul>
<li>Production data engineering experience</li>
<li>Bachelor’s degree in computer science, a related field, or equivalent experience</li>
<li>Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala</li>
<li>Experience in delivering significant technical initiatives and building reliable, large scale services</li>
<li>Experience in delivering APIs backed by relationship-heavy datasets</li>
<li>Experience implementing data governance practices, including data quality, metadata management, and access controls</li>
<li>Strong understanding of privacy-by-design principles and handling of sensitive or regulated data</li>
<li>Familiarity with data lakes, cloud warehouses, and storage formats</li>
<li>Strong proficiency in AWS services</li>
<li>Excellent written and verbal communication skills</li>
<li>Successful design and implementation of scalable and efficient data infrastructure</li>
<li>High attention to detail in implementation of automated data quality checks</li>
<li>Effective collaboration with cross-functional teams</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>
<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Spark, Scala, Data Engineering, Identity Resolution, Privacy, Platform Reliability, Data Governance, Data Lineage, Quality Checks, Schema Enforcement, Access Controls, AWS Services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tvScientific</Employername>
      <Employerlogo>https://logos.yubhub.co/tvscientific.com.png</Employerlogo>
      <Employerdescription>tvScientific is a technology company that provides a CTV advertising platform for performance marketers.</Employerdescription>
      <Employerwebsite>https://www.tvscientific.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7642253</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>165c3a6f-f1e</externalid>
      <Title>Data Engineer, Analytics</Title>
      <Description><![CDATA[<p>We are looking for an experienced Data Engineer, Analytics to join our data team. As a Data Engineer, Analytics, you will be responsible for owning the transformation and semantic layer that turns data into clean, tested, well-documented tables and dashboards that data scientists, product managers, and business stakeholders can trust and self-serve from.</p>
<p>You will define and operationalize the metrics that inform how we identify opportunities, measure success, and make decisions. This includes designing, building, and maintaining curated analytical datasets and data models that serve as the canonical sources for metrics, dashboards, and analyses.</p>
<p>You will partner closely with data science, product managers, and engineering teams to translate business questions into well-modeled, performant, and discoverable data assets. You will execute metric workflows, from metric definition and logging schema design to data modeling and visualization, with guidance from your manager and senior team members.</p>
<p>You will also build and maintain executive-level dashboards and self-serve reporting tools that enable business stakeholders to answer their own questions.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain curated analytical datasets and data models that serve as the canonical sources for metrics, dashboards, and analyses.</li>
<li>Partner closely with data science, product managers, and engineering teams to translate business questions into well-modeled, performant, and discoverable data assets.</li>
<li>Execute metric workflows, from metric definition and logging schema design to data modeling and visualization, with guidance from your manager and senior team members.</li>
<li>Build and maintain executive-level dashboards and self-serve reporting tools that enable business stakeholders to answer their own questions.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of experience in analytics or data engineering with a strong focus on building curated, consumer-facing datasets.</li>
<li>2+ years of experience designing, developing, and maintaining robust data models from structured and unstructured sources to power a variety of use cases, including experimentation.</li>
<li>2+ years of experience writing accurate and effective SQL.</li>
<li>Fluency in Python or another programming language.</li>
<li>Experience building and owning executive-level dashboards and reports using BI tools (e.g., Looker, Tableau, or similar).</li>
<li>Strong business acumen: you will partner with data scientists and product managers to translate ambiguous questions into concrete metric definitions and data models.</li>
<li>Excellent communication, comfortable being the connective tissue between technical and business teams.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Passion for Discord or online communities.</li>
<li>Experience building or contributing to a semantic layer or metrics store.</li>
<li>Experience with modern analytics and data engineering tools (e.g., dbt, BigQuery).</li>
<li>Experience implementing and monitoring audits for data quality with massive data sets (e.g., billions of rows).</li>
<li>Experience working on SEO, GEO, or other top-of-funnel, growth-focused features.</li>
<li>Experience collaborating with compliance, legal, or litigation cross-functional teams.</li>
</ul>
<p>The US base salary range for this full-time position is $160,000 to $180,000 + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $180,000 + equity + benefits</Salaryrange>
      <Skills>data engineering, analytics, SQL, Python, BI tools, data modeling, metric definition, data visualization, semantic layer, metrics store, modern analytics tools, data quality audits, SEO, GEO, compliance, legal, litigation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform used by over 200 million people every month for various purposes, including playing video games. It plays a significant role in the future of gaming.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8371252002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4ea7999b-3d8</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>
<p>RSAs are billable and complete projects to specification while delivering excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and third-party migrations, including the end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to the customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494145002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>08d03f20-666</externalid>
      <Title>Finance Systems Integration Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Finance Systems Integration Engineer to support our finance systems transformation at one of the fastest-growing AI companies. You&#39;ll design and build integrations connecting our ERP platform with critical financial applications and support our ERP implementation initiatives.</p>
<p>As you master our integration landscape, you&#39;ll have opportunities to expand into Claude-powered AI automation and data pipeline development.</p>
<p>You&#39;ll build the integration backbone for one of the fastest-growing AI companies, with a front-row seat to how Claude transforms financial operations. This is a foundational role where you&#39;ll shape our integration architecture from the ground up, then expand into cutting-edge AI automation as our needs evolve.</p>
<p>In this role, you will:</p>
<ul>
<li>Design, build, and maintain integrations connecting ERP systems with downstream applications, including ZipHQ, Brex, Navan, Clearwater, Payroll systems, Salesforce, and other critical financial platforms using Workato, MuleSoft, or similar iPaaS solutions.</li>
<li>Support integration development and testing during the ERP implementation projects.</li>
<li>Develop and maintain REST APIs, webhooks, and OAuth 2.0 authentication flows for secure system-to-system communication.</li>
<li>Implement real-time and batch integration patterns supporting high-volume financial transactions.</li>
<li>Establish monitoring, alerting, and error-handling frameworks to ensure integration reliability and data integrity.</li>
<li>Document integration architectures, data flows, API specifications, and troubleshooting procedures.</li>
<li>Collaborate with implementation consulting partners and vendors on technical integration requirements.</li>
</ul>
<p>Additional scope includes AI automation and data infrastructure, including AI agent development, data pipeline support, governance, and collaboration.</p>
<p>You may be a good fit if you have 8+ years of experience in integration development, data engineering, or systems engineering roles, possess hands-on experience with iPaaS platforms, and have strong programming skills in Python and/or JavaScript/TypeScript.</p>
<p>Strong candidates may also have experience with high-growth technology companies, background in AI/ML companies, and hands-on experience with specific platforms, including Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management, and programming skills in Python/JavaScript.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>integration development, data engineering, systems engineering, iPaaS platforms, Python, JavaScript/TypeScript, REST APIs, webhooks, OAuth 2.0, secure system-to-system communication, real-time and batch integration patterns, high-volume financial transactions, monitoring, alerting, error-handling frameworks, integration reliability, data integrity, API specifications, troubleshooting procedures, AI automation, data infrastructure, AI agent development, data pipeline support, governance, collaboration, high-growth technology companies, AI/ML companies, specific platforms, Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing group of committed researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155195008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c1cd36d-464</externalid>
      <Title>Senior Security Operations Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</p>
<p>We’re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter. At our core, we believe in empowering data practitioners:</p>
<ul>
<li>Reliable, high-quality data is the fuel that propels AI-powered data engineering.</li>
<li>AI is changing data work, fast. dbt’s data control plane keeps data engineers ahead of that curve.</li>
<li>We empower engineers to deliver reliable, governed data faster, cheaper, and at scale.</li>
</ul>
<p>About the Security Team</p>
<p>The mission of the Security Engineering team at dbt Labs is to provide clear, opinionated security guidance and scalable, secure-by-default offerings to engineers for the purpose of securing software development and enabling pragmatic risk decisions at dbt.</p>
<p><strong>Responsibilities</strong></p>
<p>As a Senior Security Operations Engineer on the Detection &amp; Response team, you will strengthen and maintain the company&#39;s security posture throughout the threat detection lifecycle, from telemetry collection and continuous monitoring through threat detection, incident response, and security event management. You will serve as a subject matter expert for security operations across dbt Labs&#39; teams and technology infrastructure, including multi-cloud production environments, identity, endpoints, and SaaS technologies.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Participate in a 24/7 on-call rotation providing coverage for active security incidents, investigations, and security events across our global infrastructure.</li>
<li>Lead investigation and remediation of security incidents, coordinating cross-functional response efforts to minimize impact and recovery time.</li>
<li>Play a major role in bootstrapping an end-to-end D&amp;R alert and investigation pipeline.</li>
<li>Triage and investigate security alerts from detection tools including Wiz Defend, Crowdstrike, and cloud security platforms to identify genuine threats and reduce false positives.</li>
<li>Develop and maintain detection rules, runbooks, and response procedures mapped to the company&#39;s threat model.</li>
<li>Automate alert triage workflows and improve mean time to detection and response through tooling and process enhancements, including leveraging AI enrichment and processing.</li>
<li>Collaborate with Infrastructure and Application Security teams to implement secure-by-design principles and remediate identified security issues.</li>
<li>Conduct security event analysis to identify policy violations, misconfigurations, and potential attack vectors before they become incidents.</li>
<li>Partner with our Enterprise Security &amp; Technology team to enhance endpoint security controls and monitoring across endpoints (macOS laptops &amp; some Windows and Linux-based development environments).</li>
<li>Design and facilitate tabletop exercises and game days to test detection, response, recovery, and remediation capabilities.</li>
<li>Contribute to the maturation of the security incident response program through documentation, training, and process improvements.</li>
<li>Mentor junior security engineers and cross-functional team members on incident handling best practices.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Demonstrated ability to excel in high-pressure situations; we need someone who can make sound decisions during active security incidents and can calmly serve as incident commander with confidence.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Have 8+ years of professional experience in security-related domains, including at least 4 years in security operations, incident response, threat hunting, or threat detection roles.</li>
<li>Have demonstrable experience leading security incident investigations and coordinating cross-team response efforts.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</li>
<li>Opportunity to work with a leading analytics engineering platform and contribute to the growth and success of the company.</li>
<li>Collaborative and dynamic work environment with a team of experienced professionals.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and experienced security professional looking for a new challenge, please submit your resume and cover letter to [insert contact information]. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Security Operations, Incident Response, Threat Hunting, Threat Detection, Cloud Security, Endpoint Security, Security Event Analysis, Security Incident Response, Tabletop Exercises, Game Days, Documentation, Training, Process Improvements, Mentoring, Security Engineering, Data Control Plane, Analytics Engineering, AI-Powered Data Engineering, Reliable High-Quality Data, Secure-By-Default Offerings, Pragmatic Risk Decisions, Multi-Cloud Production Environments, Identity, Endpoints, SaaS Technologies, Wiz Defend, Crowdstrike, Cloud Security Platforms, Detection Rules, Runbooks, Response Procedures, Mean Time to Detection, Mean Time to Response, AI Enrichment, AI Processing, Secure-By-Design Principles, Infrastructure Security, Application Security, Endpoint Security Controls, Monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, used by over 90,000 teams every week, with annual recurring revenue (ARR) surpassing $100 million.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4674498005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4083515a-f79</externalid>
      <Title>Senior Solutions Engineer (Pre-Sales)</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re creating a culture of proactiveness and customer-centricity to make data science and analytics accessible to everyone. As a Senior Solutions Engineer in the Benelux Field Engineering team, you&#39;ll be an essential part of this mission, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>You&#39;ll work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customised solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>Some of the key responsibilities of this role include:</p>
<ul>
<li>Forming successful relationships with clients throughout your assigned territory to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect.</li>
<li>Generating excitement among clients for Databricks through hands-on evaluations and Spark programming, integrating with the wider cloud ecosystem and 3rd-party applications.</li>
<li>Contributing to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>
<li>Becoming a Big Data Analytics advisor on aspects of architecture and design.</li>
<li>Supporting your customers by authoring reference architectures, how-tos, and demo applications.</li>
</ul>
<p>We&#39;re looking for someone with a strong aptitude for, and familiarity with, working with clients: creating a narrative, aligning the agenda with business priorities, and achieving tangible outcomes. Experience in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, with the ability to context-switch across levels of technical depth, is also essential.</p>
<p>Some of the key requirements for this role include:</p>
<ul>
<li>Strong knowledge of Big Data Analytics and experience with cloud platforms (e.g., Databricks, AWS, Microsoft, GCP, or other relevant platforms).</li>
<li>Proficiency with Artificial Intelligence, AI Agents, and Big Data Analytics technologies, including hands-on expertise with sophisticated proofs-of-concept and public cloud platform(s).</li>
<li>Experience diving deeper into solution architecture and Data Engineering.</li>
<li>Coding expertise in a core programming language (e.g., Python, Java, Scala).</li>
<li>A foundational understanding of Apache Spark architecture is preferable; hands-on skills will be advantageous for the role.</li>
</ul>
<p>This will be an office-based contract (Amsterdam), with a hybrid schedule. An initial 12-month fixed-term contract will be offered, with the expectation of extension or conversion to FTE upon successful completion of the probation period.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Cloud Platforms, Artificial Intelligence, AI Agents, Solution Architecture, Data Engineering, Apache Spark, Python, Java, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8346832002</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1dccad30-005</externalid>
      <Title>Senior Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re looking for a Senior Solutions Engineer to join our Field Engineering team. As a key member of our team, you will work with clients to provide technical and business value, collaborating with an Account Executive and a Senior Solutions Architect. Your primary focus will be on demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Forming successful relationships with clients throughout your assigned territory to provide technical and business value.</li>
<li>Generating excitement among clients for Databricks through hands-on evaluations and Spark programming, integrating with the wider cloud ecosystem and 3rd-party applications.</li>
<li>Contributing to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>
<li>Becoming a Big Data Analytics advisor on aspects of architecture and design.</li>
<li>Supporting customers by authoring reference architectures, how-tos, and demo applications.</li>
</ul>
<p>To succeed in this role, you should have a strong background in Big Data Analytics, with experience working with clients, creating narratives, and answering customer questions. You should also be able to independently deliver a technical proposition, identify customers&#39; pain points, and explain important areas of business value.</p>
<p>We&#39;re looking for someone who is a good communicator, with excellent problem-solving skills and the ability to work well under pressure. If you&#39;re passionate about data and AI, and want to join a dynamic team, please apply!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Analytics, Spark programming, Cloud ecosystem, Data Engineering, Technical proposition, Python, SQL, Data Science, Analytics, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide use its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8239195002</Applyto>
      <Location>Remote - Denmark</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e09bd299-1d7</externalid>
      <Title>Senior Sales Engineer (HealthTech)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Sales Engineer to join our team at Komodo Health. As a Senior Sales Engineer, you will be responsible for leading complex sales cycles and providing technical expertise to our clients. You will work closely with our sales and account teams to understand client needs and develop solutions that meet those needs.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead complex sales cycles and provide technical expertise to clients</li>
<li>Work closely with sales and account teams to understand client needs and develop solutions</li>
<li>Develop and maintain relationships with key clients and stakeholders</li>
<li>Collaborate with cross-functional teams to develop and implement sales strategies</li>
<li>Stay up-to-date with industry trends and developments to ensure that our solutions meet the evolving needs of our clients</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in sales engineering or a related field</li>
<li>Deep understanding of healthcare technology and data services</li>
<li>Excellent communication and interpersonal skills</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities</li>
<li>Strong analytical and problem-solving skills</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Advanced certifications in cloud platforms or specialized certifications in data engineering/analytics</li>
<li>Experience in a leadership or mentorship capacity within a sales engineering or solutions team</li>
<li>Familiarity with advanced CRM functionalities and sales enablement platforms</li>
<li>A track record of contributing to industry thought leadership</li>
</ul>
<p>The pay range for this role is $120,000 - $180,000 per year, and is eligible for commissions and equity awards. Benefits include health insurance, retirement savings plan, and paid time off.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 - $180,000 per year</Salaryrange>
      <Skills>sales engineering, healthcare technology, data services, communication, interpersonal skills, analytical skills, problem-solving skills, cloud platforms, data engineering/analytics, leadership, mentorship, CRM functionalities, sales enablement platforms</Skills>
      <Category>Sales</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that has developed a comprehensive suite of software applications to help healthcare organisations unlock critical insights and track patient behaviours.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8214177002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>100be909-8a8</externalid>
      <Title>Senior Solutions Engineer</Title>
      <Description><![CDATA[<p>You will be an essential part of our mission to create a unified platform that makes data science and analytics accessible to everyone. As a Senior Solutions Engineer, you will use your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>You&#39;ll work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Forming successful relationships with clients throughout your assigned territory to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect.</li>
<li>Generating excitement among clients for Databricks through hands-on evaluations and Spark programming, integrating with the wider cloud ecosystem and 3rd-party applications.</li>
<li>Contributing to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>
<li>Becoming a Big Data Analytics advisor on aspects of architecture and design.</li>
<li>Supporting your customers by authoring reference architectures, how-tos, and demo applications.</li>
<li>Developing both technically and in the pre-sales aspect with the goal of becoming an independently operating Solutions Architect.</li>
</ul>
<p>We look for individuals who are familiar with working with clients: creating a narrative, answering customer questions, aligning the agenda with key stakeholder interests, and achieving tangible outcomes. You should be able to independently deliver a technical proposition, identify customers&#39; pain points, and explain important areas of business value to develop a trusted-advisor skillset.</p>
<p>The ideal candidate will have knowledge of a core programming language such as Python, and be knowledgeable in a core Big Data Analytics domain with some exposure to advanced proofs-of-concept and an understanding of a major public cloud platform. Experience diving deeper into solution architecture and Data Engineering is also desirable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Big Data Analytics, Spark, Cloud Ecosystem, Solution Architecture, Data Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide use the Databricks Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8025494002</Applyto>
      <Location>Aarhus, Denmark; Remote - Denmark</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ca21d379-481</externalid>
      <Title>AI Solutions Engineer, Post Sales- W&amp;B</Title>
      <Description><![CDATA[<p>The Field Engineering team at Weights &amp; Biases plays a vital role in ensuring customer success and adoption of our platform. As part of this team, we partner with Sales, Support, Product, and Engineering to lead technical success after the sales process.</p>
<p>We work closely with some of the most advanced AI teams in the world, helping them build, optimize, and scale their ML and GenAI workflows across industries such as computer vision, robotics, natural language processing, and large language models (LLMs).</p>
<p>We’re hiring an AI Solutions Engineer, Post-Sales to help customers solve real-world problems by enabling them to implement and scale ML pipelines and agentic workflows using Weights &amp; Biases. In this role, you’ll collaborate with engineering teams to ensure smooth onboarding and adoption, act as a trusted advisor on best practices, and represent the voice of the customer internally.</p>
<p>You will partner directly with leading AI teams to optimize workflows, share technical expertise, and influence our product roadmap based on real-world customer feedback.</p>
<p>This is an ideal opportunity for ML practitioners who are customer-focused and eager to work with top AI companies globally.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with engineering teams to ensure smooth onboarding and adoption of Weights &amp; Biases</li>
<li>Act as a trusted advisor on best practices for implementing and scaling ML pipelines and agentic workflows</li>
<li>Represent the voice of the customer internally and influence our product roadmap based on real-world customer feedback</li>
<li>Partner directly with leading AI teams to optimize workflows and share technical expertise</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3–5 years of relevant experience in a similar role</li>
<li>Strong programming proficiency in Python</li>
<li>Hands-on experience enabling production-grade ML systems, with a focus on training and inference pipelines, experiment tracking, deployment patterns, and observability using deep learning frameworks (TensorFlow/Keras, PyTorch/PyTorch Lightning) and MLOps tooling (e.g. Airflow, Kubeflow, Ray, TensorRT)</li>
<li>Familiarity with cloud platforms (AWS, GCP, Azure)</li>
<li>Experience with GenAI/LLMs and related tools (e.g. LangChain/LangGraph, HuggingFace Transformers, Pinecone, Weaviate)</li>
<li>Strong experience with Linux/Unix</li>
<li>Excellent communication and presentation skills, both written and verbal</li>
<li>Ability to break down and solve complex problems through customer consultation and execution</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Background in robotics</li>
<li>TypeScript experience</li>
<li>Proficiency with Fastai, scikit-learn, XGBoost, or LightGBM</li>
<li>Background in data engineering, MLOps, or LLMOps, with tools such as Docker and Kubernetes</li>
<li>Familiarity with data pipeline tools</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Python, ML systems, deep learning frameworks, MLOps tooling, cloud platforms, GenAI/LLMs, Linux/Unix, communication and presentation skills, robotics, TypeScript, Fastai, scikit-learn, XGBoost, LightGBM, data engineering, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. It became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4651106006</Applyto>
      <Location>Livingston, NJ / New York, NY / Philadelphia, PA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e2f537b7-0f0</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems with the Databricks Data Intelligence Platform.</p>
<p>As a Delivery Solutions Architect (DSA), you are a trusted technical advisor to key customers, providing expert guidance that translates data, analytics, and AI challenges into high-impact business value.</p>
<p>You help design, implement, and scale data and AI solutions, focusing on architecture, operational excellence, and customer enablement.</p>
<p>Internally, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks Platform in your customers.</p>
<p>DSAs focus on:</p>
<ul>
<li>Designing secure, scalable architecture</li>
<li>Aligning people, processes, and technology</li>
<li>Establishing trusted advisor relationships</li>
<li>Leveraging the broader ecosystem of Databricks experts</li>
</ul>
<p>This is a hybrid technical and commercial role.</p>
<p>Technically, the expectations are that you become the post-sales technical lead and trusted advisor across all Databricks products for the customer&#39;s top priority use cases.</p>
<p>This requires you to use your technical skills and credibility to engage and communicate with technical and technical-leadership stakeholders in our customer organizations, conduct architecture reviews, help with performance and cost optimizations, demonstrate new capabilities, and remove blockers.</p>
<p>In parallel, it is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving onboarding plans.</p>
<p>While not a hands-on-keyboard role, this is a highly technical position where architectural skills in fields such as Data Architecture, Data Engineering, Data Warehousing, or Data Science are essential.</p>
<p>You will report directly to a DSA Manager within the Field Engineering organization.</p>
<p>The impact you will have:</p>
<ul>
<li>Be the Databricks Architect working with customer technical teams on use cases/data products, from development to go-live, addressing any technical challenges and blockers and providing guidance, best practices, and enablement</li>
<li>Lead the post-technical-win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the internal point of contact for any questions related to production/go-live status of agreed-upon use cases within an account, often for multiple use cases within the largest and most complex organizations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services, and Support resources, escalating to expert-level technical teams for tasks beyond your scope of activities or expertise</li>
<li>Create and execute a point of view on how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs, presenting them to customers when applicable for their ongoing developments</li>
<li>Develop an execution plan covering the activities of all customer-facing technical roles and teams across the following work streams:
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement/user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g., cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
</ul>
</li>
<li>Provide internal and external updates and KPI reporting on the status of usage and customer health, covering investment status, important risks and blockers, product adoption, and use case progression, to your Technical GM and Field Engineering leadership</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6-10 years of experience in which you have been accountable for the delivery of projects in Data, Analytics, or AI, and can contribute to technical debate and design choices with customers</li>
<li>Programming experience in PySpark, SQL, or Scala</li>
<li>Understanding of, and hands-on experience with, solution architecture for distributed data and analytics systems</li>
<li>Experience in customer-facing pre-sales, technical architecture, customer success, or consulting roles</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program coordination, including account and stakeholder management</li>
<li>Experience resolving complex and important escalations with senior customer technical stakeholders</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Ability to travel up to 30%</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI.</p>
<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake, and MLflow.</p>
<p>To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PySpark, SQL, Scala, Data Architecture, Data Engineering, Data Warehousing, Data Science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a platform for unifying and democratizing data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8368003002</Applyto>
      <Location>Remote - Italy</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>85f1f87e-70f</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of designing and deploying highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and whiteboarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience building scalable streaming and batch solutions using cloud-native components</li>
<li>Ability to travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461327002</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ffd169d9-40b</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and whiteboarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel to customers up to 20% of the time</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data platforms &amp; analytics, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified data intelligence platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461239002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>403305ef-ce3</externalid>
      <Title>Senior Product Marketing Manager</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. Since 2016, we’ve grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, we’ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture. We’re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter.</p>
<p>As a Senior Product Marketing Manager at dbt Labs, you will play a pivotal role in shaping the market perception of dbt, the industry standard that helps organisations build, manage, and analyse data at scale for analytics and AI, by building compelling narratives, driving product adoption, and shaping how our products are used and understood by our enterprise customers. This role is perfect for someone who’s as comfortable digging into the details of dbt features and our users’ needs as they are with telling a big-picture story about where our platform and industry are headed.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Drive the go-to-market strategy for dbt, and specifically features that position dbt for AI workflows and use cases, working closely with Product, Marketing, and Field teams to identify and address target buyer personas, challenges, and opportunities.</li>
<li>Craft product positioning and messaging with clear, differentiated positioning for dbt, grounded in customer insights, market context, and roadmapped innovations.</li>
<li>Develop and execute marketing campaigns that clearly articulate the value and unique advantages of dbt.</li>
<li>Collaborate closely with Product and Engineering to shape product strategy, using competitive, customer, and market insights to influence our product roadmap.</li>
<li>Create and maintain a core bill of materials including web content, messaging guides, solutions briefs, product videos, pitch decks, email sequences, and internal enablement materials for dbt.</li>
<li>Develop and execute product launch and campaign strategies that drive pipeline for net new customer and expansion opportunities.</li>
<li>Create and maintain internal sales tools like playbooks that reflect a high degree of customer understanding and empathy.</li>
<li>Work with Revenue Marketing teams to ensure strategic narratives and product messaging are delivered consistently at every touchpoint.</li>
<li>Measure and report on the effectiveness of marketing campaigns, using insights to drive continuous improvement.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>4+ years of experience in product marketing, preferably within the data analytics domain.</li>
<li>A proven track record of developing and executing successful marketing campaigns for B2B software products.</li>
<li>Strong understanding of the data analytics industry and its challenges, with the ability to translate complex concepts into clear, compelling messaging.</li>
<li>Exceptional written and verbal communication skills, with a talent and passion for storytelling.</li>
<li>Experience working with cross-functional teams, demonstrating strong leadership and collaborative abilities.</li>
<li>Demonstrated analytical skills, with the ability to leverage data to inform marketing strategies and decisions.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Prior experience working at a data analytics, AI, and/or open source software company</li>
<li>Familiarity with the modern analytics stack, and a strong grasp of market dynamics within it</li>
<li>Experience as an end-user of dbt, and/or experience as a data analyst, data engineer, or data scientist</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<ul>
<li>Annual Salary: $150,000 - $200,000 USD</li>
<li>Equity Stake</li>
<li>Benefits</li>
</ul>
<p><strong>dbt Labs offers:</strong></p>
<ul>
<li>Unlimited vacation (and yes we use it!)</li>
<li>401k</li>
<li>Pension Plan</li>
<li>16 weeks Paid Parental Leave</li>
<li>Wellness stipend</li>
<li>Home office stipend, and more!</li>
</ul>
<p>*Equity or comparable benefits may be offered depending on the legal limitations</p>
<p><strong>What to expect in the hiring process</strong></p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Team Interviews</li>
<li>Final Round Values Interview</li>
</ul>
<p><strong>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences.</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000 - $200,000 USD</Salaryrange>
      <Skills>product marketing, data analytics, AI, open source software, go-to-market strategy, positioning and messaging, marketing campaigns, product launches, sales enablement, storytelling, customer insights, cross-functional collaboration, dbt, modern analytics stack</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4672159005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c4c287b9-c7d</externalid>
      <Title>Engineering Manager, GTM Engineering</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. Our platform combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</p>
<p>As an Engineering Manager, GTM Engineering, you will lead the engineering team responsible for Brex&#39;s GTM Engineering surfaces, enabling our growth engine across Marketing, Sales, and self-serve funnels. This role focuses on building and optimizing our marketing website (Brex.com), GTM applications, top-of-funnel experiences, and AI-powered systems that increase efficiency, reduce CAC, and improve sales and marketing effectiveness.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a high-performing team of product engineers, fostering their career development through coaching, feedback, and hands-on guidance.</li>
<li>Drive the architectural vision, technical roadmap, and project execution for Brex.com, GTM applications, and top-of-funnel growth systems, ensuring scalability, performance, and security.</li>
<li>Champion and integrate AI-native solutions within our Marketing, Sales, and Operations workflows to drive efficiency and unlock new capabilities.</li>
<li>Operate at all levels, guiding your team through complex technical challenges while staying close to the code and contributing to design.</li>
<li>Partner with stakeholders across Marketing, Sales, and Growth, acting as a strategic advisor to translate business needs into a prioritized engineering backlog while being jointly accountable to business metrics such as CAC, payback, and conversion rates.</li>
<li>Align ad-hoc requests to broader business strategy, ensuring the team is focused on the most impactful work and confidently declining projects that are not strategically aligned.</li>
<li>Own the operational excellence of your team, managing sprint capacity, removing blockers, and ensuring high-velocity, high-quality delivery.</li>
<li>Establish and enforce engineering best practices for GTM applications and growth surfaces, including CI/CD, source control, code quality, observability, and system governance.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>6+ years of software engineering experience with strong technical depth.</li>
<li>3+ years of experience managing or leading engineers in a high-growth environment.</li>
<li>Experience architecting and building growth surfaces or marketing engineering systems.</li>
<li>Strong frontend, data, and backend technical fundamentals, with experience in modern frameworks.</li>
<li>Experience with GTM systems, marketing automation tools, experimentation platforms, or analytics instrumentation.</li>
<li>Excellent interpersonal and relationship-building skills with the ability to manage and communicate effectively with cross-functional partners in Sales and Marketing at all levels.</li>
<li>A growth-hacking, AI-native mindset with a proven ability to design and execute GTM strategies that drive meaningful revenue impact.</li>
</ul>
<p>Bonus points:</p>
<ul>
<li>Experience managing remote or distributed engineering teams.</li>
<li>Experience with B2B growth.</li>
<li>You have started your own technology venture or were a foundational engineering member of an early-stage start-up.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$240,000 - $300,000</Salaryrange>
      <Skills>Software engineering, Technical leadership, GTM systems, Marketing automation tools, Experimentation platforms, Analytics instrumentation, Frontend development, Backend development, Data engineering, Cloud computing, AI-native solutions, Growth hacking, B2B growth, Remote team management, Distributed engineering teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial platform that provides corporate cards and banking services to companies worldwide.</Employerdescription>
      <Employerwebsite>https://brex.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8367549002</Applyto>
      <Location>New York, New York, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bac99a46-7f5</externalid>
      <Title>Resident Solutions Architect - Communications, Media, Entertainment &amp; Games</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-tos, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution of engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP), with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and whiteboarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Willingness to build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461243002</Applyto>
      <Location>Denver, Colorado</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>185fad24-e93</externalid>
      <Title>Pre-sales Manager Nordics - Digital Natives &amp; Startups</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Pre-sales Manager Nordics to join our team in Stockholm. As a Pre-sales Manager, you will be responsible for guiding customers through competitive landscapes, best practices, and implementation strategies. You will also provide technical leadership to help customers understand how Databricks can solve their business problems.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Technical Leadership: Guide customers through competitive landscapes, best practices, and implementation strategies.</li>
<li>Team Leadership &amp; Mentorship: Lead, build, and mentor a high-performing team of solution architects.</li>
<li>Customer Engagement &amp; Strategy: Partner closely with sales teams to develop account strategies and align on plans that help our customers to achieve their business goals.</li>
</ul>
<p>To be successful in this role, you will need to have 8+ years of experience in a technical customer-facing role, managing C-level technical and business relationships with complex global organizations. You will also need to have 2+ years of experience leading technical pre-sales teams with a demonstrated ability to hire, develop, and manage technical teams.</p>
<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development. If you&#39;re passionate about data and AI and want to join a dynamic team, please apply!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>technical leadership, solution architecture, customer engagement, sales strategy, team management, data and AI, cloud computing, machine learning, data engineering, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8486484002</Applyto>
      <Location>Stockholm, Sweden</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4e6e79bb-e0c</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Scientist to play a key role in Medium&#39;s data science practice, delivering rigorous analysis and predictive modeling that inform product and business decisions.</p>
<p>As a member of Medium’s Machine Learning &amp; Insights team, you’ll partner closely with stakeholders across teams to help deepen our collective understanding of Medium’s members, writers, and business through data.</p>
<p>You&#39;ll work alongside our Principal Scientist, contributing methodological rigor to strategic initiatives while owning end-to-end research and model development in your domain.</p>
<p>This is a unique role for someone with a track record of solving big, ambiguous problems at the intersection of data, product, and business strategy.</p>
<p>You’ll do more than ivory-tower modeling; you’ll help us define what “content quality” looks like, design better experiments, and ship real product changes to users.</p>
<p>If you love both statistical rigor and real-world business impact, this might be the role for you!</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Proactively identify valuable areas of investigation that help deepen our understanding of our members, writers, and overall business.</li>
<li>Partner with diverse technical and non-technical stakeholders across the company to develop hypotheses and generate actionable insights in their domains.</li>
<li>Work with executives at Medium, including our CEO, to model and present data insights and findings.</li>
<li>Build and maintain statistical and predictive models about Medium’s business.</li>
<li>Run research projects and investigations, small and large, for leadership and cross-functional partners.</li>
<li>Develop and maintain quantitative models that support forecasting and strategic planning.</li>
<li>Share knowledge with and mentor engineers and other stakeholders to improve their own analytics capabilities.</li>
<li>Contribute to the broader data culture and ecosystem at Medium, helping to raise our data fluency as a team.</li>
<li>Attend Medium’s twice-yearly, in-person offsites (hosted in locations around the U.S.).</li>
</ul>
<p><strong>Skills, Knowledge and Expertise</strong></p>
<ul>
<li>You know your way around data. You have 4-6 years of experience as an in-house data scientist, with a proven track record of driving business impact through data.</li>
<li>You&#39;re highly proficient in statistical programming with either Python or R, and you’re comfortable writing SQL for analytical queries. (Python skills are strongly preferred. Our team uses Python extensively, and we’ll be expecting candidates to demonstrate Python scripting skills during the interview process.)</li>
<li>You have a track record of building, validating, and deploying predictive and statistical models that drove measurable business outcomes.</li>
<li>You&#39;re a strong collaborator with an established history of cross-team and executive-level partnership.</li>
<li>You care about quality writing, informed readership, and building a sustainable model for creators. Experience applying modeling techniques to problems unique to social platforms, subscription/membership businesses, or publishing is a plus.</li>
<li>Experience with ML engineering practices, dbt, or data engineering is a plus! We&#39;re a small team, and the folks who do best are those who like to wear many hats.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>In addition to the new skills you&#39;ll pick up, here&#39;s what else you&#39;ll enjoy by working at Medium:</p>
<ul>
<li>Working with a fully distributed team: We’re fully remote and have teammates across the U.S. &amp; France.</li>
<li>Healthcare benefits covered at 100% for employees and 70% for dependents.</li>
<li>Generous parental leave policy.</li>
<li>Mental health support through Talkspace.</li>
<li>Financial wellness support through Northstar.</li>
<li>Stipends for co-working, professional development, wifi, and a one-time home office bonus.</li>
<li>Unlimited PTO and standard company holidays.</li>
<li>A discounted Medium membership!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data science, statistical programming, Python, R, SQL, predictive modeling, statistical modeling, machine learning, data engineering, ML engineering practices, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Medium</Employername>
      <Employerlogo>https://logos.yubhub.co/medium.com.png</Employerlogo>
      <Employerdescription>Medium is a platform for reading and writing on the internet.</Employerdescription>
      <Employerwebsite>https://medium.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/medium/jobs/4192878009</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e6544bc-9bc</externalid>
      <Title>Staff Machine Learning Engineer, Listings and Host Tools Data and AI</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Machine Learning Engineer to join our Listings and Host Tools Data and AI team. As a member of this team, you will support host personalization products and provide data-driven solutions to achieve a superior host experience on Airbnb.</p>
<p>The Listings and Host Tools Data and AI team owns the data pipelines and ML models in these areas and builds the services that serve them. We leverage open source, third-party, and homegrown ML models to improve the Host and Guest experience.</p>
<p>As an ML engineer, you will partner closely with our data science, product partners, and other ML + data engineers on the team to execute on these opportunities in order to improve the Host and Guest product experience on Airbnb.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Working with large-scale structured and unstructured data to build and continuously improve cutting-edge Machine Learning models for Airbnb product, business, and operational use cases.</li>
<li>Collaborating with cross-functional partners, including software engineers, product managers, operations, and data scientists, to identify opportunities for business impact, understand, refine, and prioritize requirements for machine learning models, drive engineering decisions, and quantify impact.</li>
<li>Prototyping machine learning use cases for use in the product and working with stakeholders to iterate on requirements.</li>
<li>Developing, productionizing, and operating Machine Learning models and pipelines at scale, including both batch and real-time use cases.</li>
<li>Designing and building services and APIs to enable serving ML model-driven data to product use cases.</li>
</ul>
<p>We&#39;re looking for someone with 8+ years of industry experience in applied Machine Learning and a Master&#39;s or Ph.D. in a relevant field. You should have experience in both Natural Language Processing and Computer Vision, as well as strong programming and data engineering skills.</p>
<p>You should also have a deep understanding of Machine Learning best practices, algorithms, and domains, as well as experience with technologies such as TensorFlow, PyTorch, Kubernetes, Spark, Airflow, and data warehouses.</p>
<p>If you&#39;re passionate about building end-to-end Machine Learning infrastructure and productionizing Machine Learning models, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$204,000-$255,000 USD</Salaryrange>
      <Skills>Machine Learning, Natural Language Processing, Computer Vision, Programming, Data Engineering, TensorFlow, PyTorch, Kubernetes, Spark, Airflow, Data Warehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2008 and has since grown to become one of the largest online marketplaces for unique stays and experiences.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7454348</Applyto>
      <Location>Remote-USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>26f523c0-bbd</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494154002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e559797c-6e1</externalid>
      <Title>Engineering Manager, GTM Engineering</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Engineering Manager, GTM Engineering</p>
<p><strong>Job Description</strong></p>
<p>Join us at Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. Our platform combines global corporate cards and banking with intuitive spend management, bill pay, and travel software. This enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>You will lead the engineering team responsible for Brex&#39;s GTM Engineering surfaces, enabling our growth engine across Marketing, Sales, and self-serve funnels. This role focuses on building and optimizing our marketing website (Brex.com), GTM applications, top-of-funnel experiences, and AI-powered systems that increase efficiency, reduce CAC, and improve sales and marketing effectiveness.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead and mentor a high-performing team of product engineers, fostering their career development through coaching, feedback, and hands-on guidance.</li>
<li>Drive the architectural vision, technical roadmap, and project execution for Brex.com, GTM applications, and top-of-funnel growth systems, ensuring scalability, performance, and security.</li>
<li>Champion and integrate AI-native solutions within our Marketing, Sales, and Operations workflows to drive efficiency and unlock new capabilities.</li>
<li>Operate at all levels, guiding your team through complex technical challenges while staying close to the code and contributing to design.</li>
<li>Partner with stakeholders across Marketing, Sales, and Growth, acting as a strategic advisor to translate business needs into a prioritized engineering backlog while being jointly accountable to business metrics such as CAC, payback, and conversion rates.</li>
<li>Align ad-hoc requests to broader business strategy, ensuring the team is focused on the most impactful work and confidently declining projects that are not strategically aligned.</li>
<li>Own the operational excellence of your team, managing sprint capacity, removing blockers, and ensuring high-velocity, high-quality delivery.</li>
<li>Establish and enforce engineering best practices for GTM applications and growth surfaces, including CI/CD, source control, code quality, observability, and system governance.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>6+ years of software engineering experience with strong technical depth.</li>
<li>3+ years of experience managing or leading engineers in a high-growth environment.</li>
<li>Experience architecting and building growth surfaces or marketing engineering systems.</li>
<li>Strong frontend, data, and backend technical fundamentals, with experience in modern frameworks.</li>
<li>Experience with GTM systems, marketing automation tools, experimentation platforms, or analytics instrumentation.</li>
<li>Excellent interpersonal and relationship-building skills with the ability to manage and communicate effectively with XFN partners in Sales and Marketing at all levels.</li>
<li>A growth-hacking, AI-native mindset with a proven ability to design and execute GTM strategies that drive meaningful revenue impact.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience managing remote or distributed engineering teams.</li>
<li>Experience with B2B growth.</li>
<li>You have started your own technology venture or were a foundational engineering member of an early-stage start-up.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The expected salary range for this role is $240,000 - $300,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$240,000 - $300,000</Salaryrange>
      <Skills>Software engineering, Technical leadership, GTM systems, Marketing automation tools, Experimentation platforms, Analytics instrumentation, Frontend development, Backend development, Data engineering, Cloud computing, Business development, Growth hacking, AI-native solutions, Machine learning, Data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. It provides global corporate cards and banking combined with intuitive spend management, bill pay, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8367552002</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3d57b93e-423</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects, leading to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, data architecture, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456948002</Applyto>
      <Location>Atlanta, Georgia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19d143c9-cac</externalid>
      <Title>Data Analytics Engineer</Title>
      <Description><![CDATA[<p>Our mission is to bring web3 to a billion people by providing builders with the tools they need to build exceptional onchain products. As a Data Analytics Engineer, you will be the data layer for the entire company, designing clean, trusted datasets that power our AI tooling and ensuring every team can make decisions from a single source of truth.</p>
<p>You will build and own the canonical data models in Snowflake that serve as Alchemy&#39;s company-wide source of truth, structure datasets so vendor AI tools perform optimally out of the box, and explore and prototype MCP integrations that let internal teams query data conversationally.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and owning the canonical data models in Snowflake that serve as Alchemy&#39;s company-wide source of truth</li>
<li>Structuring datasets so vendor AI tools perform optimally out of the box</li>
<li>Exploring and prototyping MCP integrations that let internal teams query data conversationally</li>
<li>Eliminating shadow tables and one-off datasets by proactively serving team data needs at the platform level</li>
</ul>
<p>Requirements include 6+ years in data engineering with strong SQL and deep Snowflake expertise, experience designing efficient, scalable analytical data models, and proficiency with dbt or comparable transformation frameworks.</p>
<p>Benefits include medical, dental, and vision coverage, gym reimbursement, home office build-out budget, in-office group meals, commuter benefits, flexible time off, wellbeing and mental health perks, learning and development stipend, company-sponsored conferences and events, HSA and FSA plans, and fertility benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$200,000 - $240,000 annually</Salaryrange>
      <Skills>data engineering, SQL, Snowflake, dbt, MCP, AI tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Alchemy</Employername>
      <Employerlogo>https://logos.yubhub.co/alchemy.com.png</Employerlogo>
      <Employerdescription>Alchemy provides tools for building onchain products and powers 70% of top web3 teams.</Employerdescription>
      <Employerwebsite>https://www.alchemy.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/alchemy/jobs/4677021005</Applyto>
      <Location>New York, New York, United States, San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7ba4251-36b</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>Job Title: Resident Solutions Architect - Public Sector</p>
<p>We are seeking a highly skilled Resident Solutions Architect to join our Professional Services team in Washington, D.C. As a Resident Solutions Architect, you will work with customers on short to medium-term customer engagements on their big data challenges using the Databricks platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Handle a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, including 3rd party migrations and end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect and Customer teams to ensure the technical components of the engagement are delivered to meet customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues</li>
</ul>
<p>Requirements:</p>
<ul>
<li>US Top Secret Clearance required for this position</li>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems, Apache Spark, CI/CD, MLOps, end-to-end data architectures, technical project delivery, scope and timelines, documentation and white-boarding, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8356289002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fdfe04d9-2a3</externalid>
      <Title>Sr. Data Scientist - Capacity Data</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Sr. Data Scientist to lead the transition to a centralized, production-grade database system. The successful candidate will design, build, and maintain robust end-to-end data pipelines and transformations that aggregate data from across the organization. They will also enable advanced analytics, support AI agent development, and cross-functional synthesis.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architect scalable pipelines to unify disparate metrics into a single, accurate source of truth</li>
<li>Enable advanced analytics through real-time and near-real-time dashboards for operational capacity and demand managers</li>
<li>Support AI agent development by ensuring data pipelines are architected to synthesize complex data and inform high-stakes infrastructure decisions</li>
<li>Cross-functional synthesis with Data Center Operations, Supply Chain, Product, and Finance to ensure technical pipelines accurately reflect the physical realities of data center bring-up and node health</li>
</ul>
<p>The ideal candidate will have a Master&#39;s or PhD in Computer Science, Computer Engineering, Operations Research, or a related quantitative field, with 6+ years of professional experience in data engineering or data science. They will have expert-level proficiency in Python and SQL, and experience with modern data stack tools and software development best practices.</p>
<p>In addition to a competitive salary, CoreWeave offers a variety of benefits, including medical, dental, and vision insurance, company-paid life insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible spending account, health savings account, tuition reimbursement, employee stock purchase program, mental wellness benefits, family-forming support, paid parental leave, flexible PTO, catered lunch, and a casual work environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>Python, SQL, Modern data stack tools, Software development best practices, Data engineering, Data science, Domain knowledge of AI infrastructure, Background in high-complexity environments, Decision science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4663051006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b4c623fd-61c</externalid>
      <Title>Engineering Manager, GTM Engineering</Title>
      <Description><![CDATA[<p>Join us as an Engineering Manager, GTM Engineering</p>
<p>At Brex, we&#39;re the intelligent finance platform that enables companies to spend smarter and move faster. Our platform combines global corporate cards and banking with intuitive spend management, bill pay, and travel software. We&#39;re looking for an experienced Engineering Manager to lead our GTM Engineering team, responsible for building and optimising our marketing website (Brex.com), GTM applications, top-of-funnel experiences, and AI-powered systems.</p>
<p>As an Engineering Manager, you will lead a high-performing team of product engineers, driving the architectural vision, technical roadmap, and project execution for Brex.com, GTM applications, and top-of-funnel growth systems. You will champion and integrate AI-native solutions within our Marketing, Sales, and Operations workflows to drive efficiency and unlock new capabilities.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a high-performing team of product engineers, fostering their career development through coaching, feedback, and hands-on guidance.</li>
<li>Drive the architectural vision, technical roadmap, and project execution for Brex.com, GTM applications, and top-of-funnel growth systems, ensuring scalability, performance, and security.</li>
<li>Champion and integrate AI-native solutions within our Marketing, Sales, and Operations workflows to drive efficiency and unlock new capabilities.</li>
<li>Operate at all levels, guiding your team through complex technical challenges while staying close to the code and contributing to design.</li>
<li>Partner with stakeholders across Marketing, Sales, and Growth, acting as a strategic advisor to translate business needs into a prioritised engineering backlog while being jointly accountable to business metrics such as CAC, payback, and conversion rates.</li>
<li>Align ad-hoc requests to broader business strategy, ensuring the team is focused on the most impactful work and confidently declining projects that are not strategically aligned.</li>
<li>Own the operational excellence of your team, managing sprint capacity, removing blockers, and ensuring high-velocity, high-quality delivery.</li>
<li>Establish and enforce engineering best practices for GTM applications and growth surfaces, including CI/CD, source control, code quality, observability, and system governance.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>6+ years of software engineering experience with strong technical depth.</li>
<li>3+ years of experience managing or leading engineers in a high-growth environment.</li>
<li>Experience architecting and building growth surfaces or marketing engineering systems.</li>
<li>Strong frontend, data, and backend technical fundamentals, with experience in modern frameworks.</li>
<li>Experience with GTM systems, marketing automation tools, experimentation platforms, or analytics instrumentation.</li>
<li>Excellent interpersonal and relationship-building skills with the ability to manage and communicate effectively with XFN partners in Sales and Marketing at all levels.</li>
<li>A growth-hacking, AI-native mindset with a proven ability to design and execute GTM strategies that drive meaningful revenue impact.</li>
</ul>
<p>Bonus points:</p>
<ul>
<li>Experience managing remote or distributed engineering teams.</li>
<li>Experience with B2B growth.</li>
<li>You have started your own technology venture or were a foundational engineering member of an early-stage start-up. We value entrepreneurial spirit &amp; scrappiness!</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $240,000 CAD - $300,000 CAD. However, the starting base pay will depend on a number of factors including the candidate&#39;s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$240,000 CAD - $300,000 CAD</Salaryrange>
      <Skills>Software engineering, Technical leadership, GTM systems, Marketing automation tools, Experimentation platforms, Analytics instrumentation, Frontend development, Backend development, Data engineering, Cloud computing, AI-native solutions, Growth hacking, B2B growth, Remote team management, Entrepreneurial spirit</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial platform that provides corporate cards and banking services to companies across over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8367553002</Applyto>
      <Location>Vancouver, British Columbia, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbd81d47-d7e</externalid>
      <Title>Data Platform Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>We&#39;re hiring for multiple roles within our Professional Services team. This position may be offered as Senior Solutions Consultant, Resident Solutions Architect, or Senior Resident Solutions Architect. The final title will align to your experience, technical depth, and customer-facing ownership.</p>
<p>As a Big Data Solutions Architect (Internal Title - Resident Solutions Architect) in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 10% of the time</li>
</ul>
<p>[Preferred] Databricks Certification but not essential</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8486738002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>219928ef-6de</externalid>
      <Title>Resident Solutions Architect - Healthcare &amp; Life Sciences</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8494148002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0c456364-565</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>As a Delivery Solutions Architect at Databricks, you will be a trusted technical advisor embedded within the customer organisation. You will work closely with sales and field engineering to accelerate adoption and growth of the Databricks platformقت You will ensure customer success by providing technical accountability for our most complex customers,helping them maximise the value of Databricks workloads they have already selected and improving their return on investment.</p>
<p>This role blends deep technical leadership with strategic customer engagement. You will own the post-sales technical strategy for the customer’s highest-value use cases and serve as their primary advisor across the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Being the accountable Databricks Architect for your assigned customers, working with technical teams to guide priority use cases from design through go-live, removing blockers, providing best practices, and ensuring stable, scalable adoption.</li>
<li>Leading the post-technical-win strategy and execution plan for major Databricks use cases, aligning with Solutions Architects to understand full demand plans and drive clarity across multiple selling teams and stakeholders.</li>
<li>Owning the technical leadership of assigned use cases, creating certainty from ambiguity and coordinating onboarding, enablement, success, go-live, and healthy consumption of workloads selected for Databricks.</li>
<li>Serving as the first point of contact for production/go-live status, often across multiple complex use cases within large enterprise organisations.</li>
<li>Orchestrating the broader Databricks ecosystem (Shared Services, User Education, Onboarding/Technical Services, Support, and specialist technical teams) to ensure high-quality delivery and escalate advanced issues when needed.</li>
<li>Creating and executing a point of view for accelerating use cases into production, collaborating with Professional Services on proposals as needed.</li>
<li>Partnering with Product and Engineering to introduce new capabilities, private previews, and upgrade paths that support customer roadmaps.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Programming experience in Python, SQL, or Scala, and a solid understanding of distributed data systems.</li>
<li>5+ years of experience delivering Data, Analytics, or AI projects, with the ability to contribute to architectural discussions with customers.</li>
<li>Experience in customer-facing technical roles such as technical architecture, pre-sales, consulting, or customer success.</li>
<li>Ability to guide architectural decisions in domains such as data engineering, data architecture, data warehousing, or data science.</li>
<li>Demonstrated ability to drive delivery outcomes without hands-on keyboard responsibilities.</li>
<li>Experience resolving complex escalations with senior customer stakeholders.</li>
<li>Understanding of how to connect technical deliverables to business value.</li>
<li>Track record of achieving or exceeding goals or objectives.</li>
<li>Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent experience.</li>
<li>Fluency in English is required; French or German language skills are a plus.</li>
<li>Ability to travel up to 30%.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Distributed data systems, Data engineering, Data architecture, Data warehousing, Data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8309177002</Applyto>
      <Location>Zürich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd1da18e-84d</externalid>
      <Title>Principal Software Engineer II - Observability</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join the Observability Experience Team as one of the Tech Leads. As part of this team, you will work at the intersection of big data engineering, backend architecture, and experiences to help users obtain the best insights from their Observability signals, especially logs, metrics, and traces.</p>
<p>Key responsibilities include collaborating with product management, product design, and multiple teams across Elastic to define and evolve the end-to-end experiences for Observability. You will also be a contact point for other teams within Elastic, providing hands-on support and guidance. Additionally, you will help the team define coding practices and standards, foster a culture of mutual respect, collaboration, and consensus-based decision-making, and stay true to the principles of software development as adopted by the team.</p>
<p>The ideal candidate will have experience leading technical projects in the data and enterprise architecture areas, with proven knowledge of building and running sophisticated technical infrastructures and engineering sound software systems. They should also have hands-on experience using and developing Observability tools, preferably in the Logs space, and experience mentoring expert engineers, providing technical and professional guidance. Furthermore, they should be able to define a long-term technical vision for an area of a data-intensive application, working across teams and organizations to collaboratively build the technical roadmap.</p>
<p>Bonus points for experience as a user of the Elastic Stack and experience in SRE roles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Observability tools, Logs space, Big data engineering, Backend architecture, Experiences, Elastic Stack, SRE roles</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7635297</Applyto>
      <Location>Greece</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8efd6b3b-251</externalid>
      <Title>Resident Solutions Architect - Public Sector</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Travel to customers 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8456973002</Applyto>
      <Location>Boston, Massachusetts</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe0d53c0-05e</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play a critical role during this journey. The DSA works across a small number of our largest or highest potential key accounts, collaborating across Databricks teams to accelerate the adoption and growth of the Databricks platform.</p>
<p>As a DSA, you will help ensure customer success by driving focus and technical accountability to our most complex customers who need guidance to accelerate consumption on Databricks workloads that they have already selected. This is a hybrid technical and commercial role. It is commercial in the sense that you will be required to own and drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, owning executive relationships and creating and driving plans and strategies for Databricks colleagues to execute upon.</p>
<p>This is in parallel to being technical, with expectations being that you become at least Level 200 across all Databricks products/workloads and that you become the Use Case-specific technical lead post Technical Win. You will bring strong executive relationship management skills and high levels of technical credibility to effectively engage and communicate at all levels with an organization, in particular with a track record of building strong relationships with the customers&#39; executives and C-suite, elevating the conversation, and helping them realize the value of Databricks.</p>
<p>You will report directly to a Director, Field Engineering, as part of your Business Unit&#39;s Technical GM organization. You will play a key role in establishing the fundamental assets and best practices within the DSA team, mentoring other DSAs and wider account team members within your region, helping them develop personally, professionally and to further their careers.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers.</li>
<li>Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts.</li>
<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>
<li>Be the first point of contact for any technical issues or questions related to production/go-live status of agreed-upon Use Cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations.</li>
<li>Leverage Shared Services resources (User Education, Onboarding/Technical Services, and Support), escalating to Level 400/500 technical experts (Specialist Solution Architects and Product Specialists) for tasks that are beyond your scope of activities or expertise.</li>
<li>Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals.</li>
<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs (DBR, E2 and Unity Catalog).</li>
<li>Build and maintain an executive-level as well as a detailed programme-level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist, and SA across the workstreams below:</li>
</ul>
<ul>
<li>Key use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of LH vision)</li>
<li>Organic needs for current investment, e.g. cloud cost control, tuning &amp; optimization</li>
<li>Executive and operational governance</li>
<li>Proactively provide internal and external updates</li>
<li>KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression to your Technical GM</li>
<li>Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Engineering technologies (e.g. Spark, Hadoop, Kafka), Data Warehousing (e.g. SQL, OLTP/OLAP/DSS), Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO), Executive disciplinary management, Influencing and leading teams, Strategic Management Consulting, Building and steering to a value case, Quota ownership, achievement and track record of great performance against objective target, Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. The company was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482406002</Applyto>
      <Location>Seoul, South Korea</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2a8bc1c-4bd</externalid>
      <Title>Staff Software Engineer (L4)</Title>
      <Description><![CDATA[<p>Join the team as our next Staff Software Engineer in the Enterprise AI Engineering team.</p>
<p>Twilio is undergoing a major business transformation powered by Enterprise AI, supported by a dedicated engineering team building the foundations for a unified, secure, and scalable operating system across GTM functions (Sales, Support, Operations, etc.) as well as Internal non-GTM functions (Finance, HR, Legal, etc.).</p>
<p>Our platform is designed to support a multitude of business functions by deploying intelligent agentic solutions that automate complex workflows and deliver unprecedented user experiences.</p>
<p>We&#39;re building the future of work at Twilio, and this role offers the opportunity to be at the forefront of enterprise AI innovation.</p>
<p>This role focuses specifically on transforming how Twilio&#39;s Customer Support organization operates through AI-powered tools and agentic products.</p>
<p>We are looking for Full-Stack Engineers who view AI as a fundamental shift in the software development lifecycle of engineering products and the delivery of beautiful, engaging user experiences.</p>
<p>This is an opportunity to deliver innovative software products and solutions across every business function in the company such as Sales, Marketing, GTM, HR, Finance and Legal.</p>
<p>Joining this team means building production-grade, full-stack AI applications. You won’t just be wrapping APIs; you will be engineering the entire lifecycle of agentic applications.</p>
<p>As a Staff Software Engineer within Enterprise AI, you are the technical heartbeat of our products.</p>
<p>Your role is to bridge the gap between bleeding-edge AI research and robust, full-stack production systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Co-lead the design and development of our software infrastructure, driving technical vision and strategy to ensure scalability, reliability, and performance.</li>
<li>Drive the development of sophisticated, stateful web applications.</li>
<li>Serve as developer leader in distributed systems, data technologies, with strong software engineering skills.</li>
<li>Drive technical innovation and research to stay at the forefront of emerging data technologies and best practices.</li>
<li>Mentor and elevate a team of high-performing engineers.</li>
<li>Collaborate closely with cross-functional teams to understand business requirements and translate them into scalable and efficient technical solutions.</li>
<li>Continuously adapt to the evolving JavaScript ecosystem to maximize engineering efficiency.</li>
<li>Ensure data quality, integrity, and security throughout the data lifecycle, adhering to industry best practices and compliance standards.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>8+ years of experience in data engineering, software development, or a related field, with at least 3 years in a technical leadership role.</li>
<li>Experience with full-stack development building web apps, using modern languages such as JavaScript or TypeScript and frameworks such as React.</li>
<li>Proven track record of architecting and delivering complex data projects at scale, with a deep understanding of data infrastructure and distributed systems.</li>
<li>Strong understanding of data modeling, data warehousing, and ETL processes, with experience designing and optimizing data pipelines.</li>
<li>Excellent communication and collaboration skills, with the ability to influence technical decisions and drive alignment across teams.</li>
<li>Strong leadership skills, with a track record of mentoring and developing high-performing engineering teams.</li>
<li>Demonstrated ability to thrive in a fast-paced, dynamic environment and deliver results under tight timelines.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience developing production-quality LLM applications and using modern agent frameworks such as Langchain, Langgraph, Llamaindex, LangSmith, LangFuse, CrewAI, and/or others is a plus.</li>
<li>Expertise in big data technologies such as Hadoop, Spark, Kafka, and cloud-based data services (AWS/GCP/Azure).</li>
</ul>
<p>What We Offer: Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location. Based on role, employees may also be eligible for additional compensation and benefits, including but not limited to incentive programs, commissions, equity grants, health and wellness benefits, retirement contributions, and paid time off. The estimated pay ranges for this role are as follows:</p>
<ul>
<li>$160,320 - $200,400</li>
<li>Target Bonus Percentage 15% (When Applicable)</li>
</ul>
<p>The successful candidate’s starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location.</p>
<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values , something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn&#39;t what you&#39;re looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,320 - $200,400</Salaryrange>
      <Skills>JavaScript, Typescript, React, Data engineering, Software development, Distributed systems, Data technologies, Strong software engineering skills, Technical innovation, Research, Emerging data technologies, Best practices, Langchain, Langgraph, Llamaindex, LangSmith, LangFuse, CrewAI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a US-based technology company that provides cloud communication services. It has acquired several companies and expanded its operations globally.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7714237</Applyto>
      <Location>Remote - Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d94d7ea-9ca</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects that lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery, including managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have: Databricks Certification</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data platforms &amp; analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, design and deployment of highly performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461330002</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>071153a3-427</externalid>
      <Title>Delivery Partner Manager, Services</Title>
      <Description><![CDATA[<p>At Databricks, we are seeking a Delivery Partner Manager, Services to join our Professional Services team. As a Delivery Partner Manager, you will be responsible for overseeing the Databricks Delivery Partner Program, which aims to utilize close partnerships with key system integrators to effectively co-deliver Databricks services engagements.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Working with the Professional Services team to utilize Databricks delivery partners and accelerate customer project outcomes</li>
<li>Building and developing Professional Services relationships with Global System Integrators (GSIs), Regional System Integrators (RSIs), and the regional Partner team to support bi-directional pipeline review and bi-directional engagements (subcontracting and advisory) for company priorities</li>
<li>Building an RFP process to support Partner Selection for large, complex, multi-year, multi-phase migration deals</li>
<li>Creating templates and scorecards to supplement partner selection with a data-driven approach</li>
<li>Spearheading and leading efforts to incubate and ramp new partners with Gen AI capabilities</li>
<li>Driving MBRs and QBRs with priority partners, and developing long-term roadmaps for PS Partner delivery priorities</li>
<li>Developing scorecards and feedback loops to segment partners by focus areas, regions, and company priorities</li>
<li>Driving delivery governance to ensure partner quality and delivery on Services-led and partner-supported implementations</li>
<li>Supporting global programs to measure partner quality such as CSAT, Partner Survey, and program governance</li>
<li>Leading programs that align with our PS &amp; Partner strategy for Assurance and advisory services</li>
<li>Working with our cross-functional channel teams to enable GSIs, RSIs, SIs on Assurance services, and support bi-weekly reviews and deal governance</li>
<li>Supporting Partner IP efforts such as the creation of service delivery kits to better scale our ecosystem</li>
<li>Liaising with Delivery Partner organizations to ensure personnel have the required level of training and certification to deliver Databricks projects in accordance with best practices</li>
<li>Championing and providing thought leadership for specialization programs</li>
<li>Implementing and leading programs to meet requirements and exceed delivery quality targets</li>
<li>Building and maintaining an effective onboarding process for partner resources ensuring effective utilization and quality of services across Databricks customer deliveries</li>
<li>Working with the Databricks resource management team to oversee the pipeline planning process with each of the delivery partners, ensuring partner resource availability in a timely manner to support Databricks service opportunities</li>
<li>Allocating and outsourcing projects to service partners</li>
<li>Selecting the appropriate partner for the project and collaborating closely with them to ensure project success</li>
<li>Working with our legal, people operations, and other cross-functional organizations to support bi-directional contractual negotiations</li>
<li>Advocating for the Partner&#39;s needs internally, managing any escalations and issues on projects being delivered by partners</li>
<li>Developing, implementing, and monitoring Key Performance Indicators for each Services partner</li>
<li>Coaching, mentoring junior members of the team, and partner resources</li>
</ul>
<p>We are looking for someone with extensive industry experience, including resource management and demand pipeline management, and a record of success in navigating and driving engagements with large IT organizations, with a focus on managed or professional service delivery. Experience managing partner or third-party organization enablement, including upskilling teams to provide delivery capabilities, is also highly desirable.</p>
<p>If you are a results-driven professional with excellent communication and interpersonal skills, and a passion for delivering high-quality services, we encourage you to apply for this exciting opportunity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>resource management, demand pipeline management, partner relationship management, project management, service delivery, training and development, communication and interpersonal skills, data analysis and interpretation, problem-solving and critical thinking, Gen AI capabilities, machine learning, data engineering, data science, cloud computing, DevOps, agile methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data engineering, data science, and machine learning. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8298163002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f18e7306-00c</externalid>
      <Title>Resident Solutions Architect - Financial Services</Title>
      <Description><![CDATA[<p>As a Senior Big Data Solutions Architect (Sr. Resident Solutions Architect) on our Professional Services team, you will work with clients on short- to medium-term engagements, addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, providing training, and performing other technical tasks that help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>The impact you will have:</p>
<ul>
<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap hands-on projects that lead to customers&#39; successful understanding, evaluation, and adoption of Databricks.</li>
<li>Provide an escalated level of support for customer operational issues.</li>
<li>Work with the Databricks technical team, Project Manager, Architect, and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement specific product and support issues.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>9+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark and knowledge of Apache Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Capable of design and deployment of highly performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines.</li>
<li>Documentation and white-boarding skills.</li>
<li>Experience working with clients and managing conflicts.</li>
<li>Experience in building scalable streaming and batch solutions using cloud-native components</li>
<li>Travel to customers up to 20% of the time</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Databricks Certification</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, Databricks, CI/CD, MLOps, technical project delivery, documentation, white-boarding, client management, conflict management, scalable streaming, batch solutions, cloud-native components</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a company that provides data and AI solutions. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8461325002</Applyto>
      <Location>Philadelphia, Pennsylvania</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc923f59-e03</externalid>
      <Title>Senior Data Engineering Analyst</Title>
      <Description><![CDATA[<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen–fast.</p>
<p>We&#39;re seeking a Senior Data Systems Analyst to become the expert on our company data pipeline: the system that ingests, processes, and profiles millions of company records that power our customers&#39; go-to-market strategies. In this role, you&#39;ll build deep expertise in how our company data flows from acquisition through profiling and output. You&#39;ll read code to understand data transformations and system dependencies, bring informed opinions to design conversations with Engineering and Product, and help shape the evolution of our next-generation data infrastructure.</p>
<p>As you build mastery of our systems, you&#39;ll increasingly lead strategic data improvement initiatives that require both systems thinking and creative problem-solving. This isn&#39;t about building dashboards or SQL reports. This is about understanding data systems at an architectural level, solving ambiguous data challenges, and ensuring our pipeline infrastructure continuously evolves to meet customer needs and maintain competitive advantage.</p>
<p>You&#39;ll work closely with other data analysts during an active infrastructure transition period, and as systems stabilize and your expertise deepens, you&#39;ll progressively own more of the pipeline architecture and strategic initiatives. This is a role with significant growth runway for someone who wants to become the go-to technical expert on company data systems.</p>
<p><strong>Who You Are</strong></p>
<p>Systems Thinker with Technical Depth: You understand how data systems work, not just what they produce. You&#39;ve worked with data pipelines, ETL systems, or data processing infrastructure; maybe you&#39;ve improved one, debugged one, or owned components of one. You can read code (Python, Java, SQL, or similar) well enough to understand data transformations and trace how data flows through systems.</p>
<p>Opinionated Technical Contributor: You don&#39;t just execute; you have informed opinions on how things should work. You can assess technical tradeoffs, evaluate whether a proposed solution is feasible, and contribute meaningfully to design conversations with engineers.</p>
<p>Growth-Oriented Problem Solver: You&#39;re excited to build deep expertise in a complex domain and grow into leading strategic initiatives. You&#39;ve tackled ambiguous problems that required figuring things out as you went, and you want to expand your project leadership capabilities in a systems-focused environment.</p>
<p>Analytical and Hands-On: You&#39;re equally comfortable writing code to analyze data patterns and manually investigating edge cases to understand what&#39;s really happening. You dig into details when needed and know when to zoom out to see the bigger picture.</p>
<p>Clear Communicator: You can explain technical complexity to non-technical audiences. You&#39;ve worked effectively with Engineering, Product, or cross-functional teams, translating between technical constraints and business needs.</p>
<p>Comfortable with Ambiguity: You thrive in evolving environments where priorities shift and problems aren&#39;t always well-defined. You maintain momentum and quality even when the path forward isn&#39;t perfectly clear.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>
<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p><strong>Build Deep Pipeline &amp; Systems Expertise</strong></p>
<ul>
<li>Master our company data pipeline architecture: how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
</ul>
<p><strong>Contribute to Pipeline Evolution &amp; Infrastructure Improvements</strong></p>
<ul>
<li>Participate actively in design conversations with Engineering and Product about our next-generation pipeline, bringing data quality insights, technical feasibility assessments, and informed opinions on architectural decisions</li>
<li>Help validate pipeline improvements through rigorous testing, impact analysis, and hands-on verification of data quality</li>
<li>Translate data quality investigations and emerging requirements into system-level improvement opportunities</li>
<li>Collaborate with team members to determine when problems should be solved at the pipeline/profiler level versus through downstream approaches</li>
</ul>
<p><strong>Solve Complex, Ambiguous Data Challenges</strong></p>
<ul>
<li>Lead or contribute to data improvement initiatives that require both systems thinking and creative problem-solving, such as improving location verification across international markets, integrating new data sources, or solving novel data extraction challenges</li>
<li>Tackle problems where the solution isn&#39;t obvious through a blend of code analysis, manual investigation, cross-functional coordination, and iterative problem-solving</li>
<li>Build and apply repeatable approaches to testing, validation, and root cause analysis</li>
</ul>
<p><strong>Build Partnerships &amp; Institutional Knowledge</strong></p>
<ul>
<li>Develop strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts</li>
<li>Conduct impact analyses and validation studies to ensure proposed changes deliver intended outcomes</li>
<li>Document your learning, approaches, and insights so knowledge is shared and institutional memory builds across the team</li>
<li>Serve as a technical resource as you develop expertise, helping bridge immediate data quality needs with long-term pipeline capabilities</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure; you understand how data moves through systems and what can go wrong</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with ability to investigate complex issues systematically</li>
<li>Excellent communication skills; able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with strong ownership mentality; you drive your work forward and know when to seek input</li>
</ul>
<p><strong>Strongly Preferred</strong></p>
<ul>
<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>
<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>
<li>Background contribution</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data analysis, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system dependencies, data quality, data profiling, entity resolution, record linkage, data matching</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides software solutions for sales and marketing professionals.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408637002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>726e518c-28f</externalid>
      <Title>Delivery Solution Architect</Title>
      <Description><![CDATA[<p>Job Title: Delivery Solution Architect</p>
<p>We are seeking a highly skilled Delivery Solution Architect to join our team. As a Delivery Solution Architect, you will be responsible for delivering technical solutions to customers and collaborating with sales and field engineering teams to accelerate customer adoption of our platform.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Collaborate with sales and field engineering teams to deliver technical solutions to customers.</li>
<li>Provide technical guidance and support to customers to ensure they get the maximum value and ROI from our platform.</li>
<li>Work closely with customers to understand their business requirements and develop tailored solutions to meet their needs.</li>
<li>Develop and maintain relationships with key stakeholders, including customers, partners, and internal teams.</li>
<li>Collaborate with cross-functional teams to identify and prioritize customer needs and develop solutions to address them.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of experience in delivering technical projects or programs in the data and AI space.</li>
<li>Strong understanding of distributed data systems and solution architecture.</li>
<li>Experience working with customers to deliver technical solutions and providing technical guidance and support.</li>
<li>Strong communication and interpersonal skills, with the ability to work effectively with customers, partners, and internal teams.</li>
<li>Experience working in a fast-paced environment and prioritizing multiple tasks and deadlines.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>
<li>Opportunity to work with a leading-edge technology company and contribute to the development of innovative solutions.</li>
<li>Collaborative and dynamic work environment with a team of experienced professionals.</li>
<li>Professional development opportunities, including training and education programs.</li>
</ul>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Distributed data systems, Solution architecture, Customer-facing technical solutions, Technical guidance and support, Cloud computing, Data engineering, Machine learning, Data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8428882002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a77cdb4d-12c</externalid>
      <Title>Senior Manager, Advanced Analytics</Title>
      <Description><![CDATA[<p>We are seeking a Senior Manager for Advanced Analytics to provide critical support in growing high-quality supply and guest engagement. This role will directly report to the Director of Advanced Analytics and work closely with senior stakeholders across multiple cross-functional teams.</p>
<p>The ideal candidate will possess a unique blend of business acumen and data science expertise. They will lead an innovative team focused on delivering data-driven insights and building tools for effective management of the marketplace dynamics.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading and developing a high-performing geographically distributed team, fostering a culture of innovation, agility, growth, and collaboration.</li>
<li>Driving craft and analytical excellence in advanced data analysis, statistical modeling, experimentation, and metric frameworks.</li>
<li>Serving as a key analytics partner to senior stakeholders, providing data-driven insights to guide key strategic decisions.</li>
<li>Collaborating with teams including Strategic Finance, Data Engineering, Data Science, and Product Marketing in addition to other cross-functional teams.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>A minimum of 10 years of experience in advanced analytics, preferably in marketplace and/or travel industries, with at least 4 years in people management roles.</li>
<li>Experience in leading and motivating a geographically distributed team.</li>
<li>Deep technical expertise in data analysis, experimentation, causal inference, and familiarity with LLMs.</li>
<li>Robust business acumen, strategic thinking skills, and the ability to make informed judgments.</li>
<li>Outstanding communication skills, capable of engaging with a variety of stakeholders and conveying complex concepts in an accessible manner.</li>
<li>Exceptional stakeholder management skills, with a proven ability to collaborate and influence across functions.</li>
<li>An agile, growth-minded approach, demonstrated through a history of driving projects from ideation to impact.</li>
</ul>
<p>This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD</Salaryrange>
      <Skills>data analysis, statistical modeling, experimentation, metric frameworks, data engineering, data science, product marketing</Skills>
      <Category>Analytics</Category>
      <Industry>Travel</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, founded in 2007.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7627054</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c53ecdd3-dc7</externalid>
      <Title>Scale Solution Engineer</Title>
      <Description><![CDATA[<p>As a Scale Solution Engineer at Databricks, you will play a critical role in advising customers during their onboarding process. You will work directly with customers to help them onboard and deploy Databricks in their production environment.</p>
<p>Your impact will be significant, ensuring new customers have an excellent experience by providing technical assistance early in their journey. You will become an expert on the Databricks Platform and guide customers in making the best technical decisions. You will also work directly with multiple customers concurrently to provide technical solutions.</p>
<p>To succeed in this role, you will need:</p>
<ul>
<li>An undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>1+ years experience in a technical role, preferably in the data or cloud field</li>
<li>Knowledge of at least one of the public cloud platforms (AWS, Azure, or GCP)</li>
<li>Knowledge of a programming language such as Python, Scala, or SQL</li>
<li>Knowledge of end-to-end data analytics workflow</li>
<li>Hands-on professional or academic experience in one or more of the following: Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow), Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
<li>Excellent time management and prioritization skills</li>
<li>Excellent written and verbal communication</li>
</ul>
<p>Bonus: Knowledge of Data Science and Machine Learning (e.g., build and deploy ML Models)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>public cloud platforms, AWS, Azure, GCP, Python, Scala, SQL, Data Engineering technologies, ETL, DBT, Spark, Airflow, Data Warehousing technologies, Stored Procedures, Redshift, Snowflake, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8408817002</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61b49b86-6c8</externalid>
      <Title>Resident Solutions Architect - Manufacturing</Title>
      <Description><![CDATA[<p>As a Resident Solutions Architect (RSA) on our Professional Services team, you will work with customers on short- to medium-term engagements, addressing their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, providing training, and performing other technical tasks that help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>The impact you will have:</p>
<ul>
<li>Handle a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s, and productionalizing customer use cases</li>
<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guide strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>
<li>Consult on architecture and design; bootstrap or implement customer projects that lead to customers&#39; successful understanding, evaluation, and adoption of Databricks</li>
<li>Provide an escalated level of support for customer operational issues</li>
<li>Collaborate with the Databricks Technical, Project Manager, Architect, and Customer teams to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>You will report to the regional Manager/Lead.</p>
<p>What we look for:</p>
<ul>
<li>6+ years of experience in data engineering, data platforms &amp; analytics</li>
<li>Comfortable writing code in either Python or Scala</li>
<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Ability to build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Ability to travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $180,656-$248,360 USD</p>
<p>Zone 2 Pay Range $180,656-$248,360 USD</p>
<p>Zone 3 Pay Range $180,656-$248,360 USD</p>
<p>Zone 4 Pay Range $180,656-$248,360 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,656-$248,360 USD</Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, distributed computing, Python, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8341313002</Applyto>
      <Location>New York City, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>205a5f25-1f0</externalid>
      <Title>Senior Manager, Infrastructure Data Science</Title>
      <Description><![CDATA[<p>Databricks is looking for a Senior Manager, Infrastructure Data Science to shape the future of Databricks infrastructure through data science. You will tackle some of the most complex challenges related to capacity planning, performance optimisation, reliability engineering, infrastructure efficiency, and customer experience.</p>
<p>At Databricks, we enable data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform.</p>
<p>As a Senior Manager, Infrastructure Data Science, you will lead a team of data scientists and work directly in partnership with engineering leaders to empower them with data-driven insights and solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Thought leadership and strategic guidance on infrastructure planning, balancing current needs with future growth projections to ensure scalability and cost-effectiveness.</li>
<li>Promoting a data-driven approach to infrastructure decisions, influencing stakeholders across engineering, and supporting the use of data science insights for high-impact, aligned strategies.</li>
<li>Implementing data-driven solutions to identify, predict, and mitigate infrastructure risks and failures, reducing downtime and improving system reliability and performance, directly impacting end-user satisfaction and operational continuity.</li>
<li>Spearheading analyses to improve resource utilisation efficiency, identifying and eliminating inefficiencies across infrastructure usage, resulting in cost savings and optimised performance.</li>
<li>Establishing data frameworks that empower support teams to troubleshoot and resolve product issues faster, decreasing response times and enhancing customer experience and support quality.</li>
<li>Mentoring and managing a team of data scientists, instilling best practices in data science and engineering, and fostering a collaborative environment focused on innovative, scalable infrastructure solutions.</li>
</ul>
<p>We look for candidates with 10+ years of experience in infrastructure data science, machine learning, and advanced analytics at high-velocity, high-growth companies, as well as 5+ years of management experience hiring and developing teams.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$228,600-$314,250 USD</Salaryrange>
      <Skills>infrastructure data science, machine learning, advanced analytics, data visualisation, data engineering, data modelling, big data technologies, leadership, communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a global organisation with over 7000 employees, founded in 2013 by the original creators of Apache Spark.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7734812002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c63000ba-f8b</externalid>
      <Title>Senior Staff Machine Learning Engineer, Trust</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Staff Machine Learning Engineer to join our Trust team, which is responsible for developing the technology that helps protect our community and platform from fraud.</p>
<p>As a senior technical individual contributor, you will partner closely with our leaders across the broader technical organization to design, execute and deliver in a complex and collaborative roadmap of Trust engineering efforts.</p>
<p>Your expertise will be crucial in defining and executing on the long-term ML technical vision and strategy for the Trust organization, identifying key investments, architecting scalable solutions, and championing best practices that advance the state-of-the-art in production ML systems.</p>
<p>You will serve as a technical leader and mentor to other ML and software engineers across the organization, providing guidance on complex architectural and modeling challenges, and raising the overall technical bar.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and executing on the long-term ML technical vision and strategy for the Trust organization</li>
<li>Serving as a technical leader and mentor to other ML and software engineers across the organization</li>
<li>Driving and delivering large-scale, multi-quarter ML initiatives that span multiple teams</li>
<li>Working with large-scale structured and unstructured data to build and continuously improve cutting-edge Machine Learning models for Airbnb product, business, and operational use cases</li>
<li>Collaborating with cross-functional partners to identify opportunities for business impact and understand, refine, and prioritize requirements for machine learning models</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>12+ years of industry experience in applied Machine Learning</li>
<li>2-3+ years working with LLMs and novel GenAI technologies</li>
<li>Proficiency and proven experience with Agentic AI (frameworks, orchestration, architecture, and productionization)</li>
<li>A Bachelor’s, Master’s, or PhD in CS/ML or a related field</li>
<li>Strong programming (Scala/Python/Java/C++ or equivalent) and data engineering skills</li>
<li>Deep understanding of Machine Learning best practices, algorithms, and domains</li>
<li>Experience with Agentic AI, TensorFlow, PyTorch, and Kubernetes, and industry experience building end-to-end Machine Learning and Agentic infrastructure</li>
</ul>
<p>Our Commitment To Inclusion &amp; Belonging: Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions.</p>
<p>How We&#39;ll Take Care of You: Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits.</p>
<p>Pay Range $244,000-$305,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$244,000-$305,000 USD</Salaryrange>
      <Skills>Machine Learning, Agentic AI, Tensorflow, PyTorch, Kubernetes, Scala, Python, Java, C++, Data Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a leading online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals worldwide.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7592146</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>