<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>78270c8d-016</externalid>
      <Title>Operations Data Governance &amp; Controls Specialist</Title>
      <Description><![CDATA[<p>As an Operations Control Specialist – Data Governance &amp; Controls, you will design, implement, and support technical data governance solutions with a focus on the firm&#39;s Trader Master and related reference data domains.</p>
<p>This role requires a strong technical background in Data Management, Data Architecture, Data Lineage, Data Quality, Master Data Management (MDM), and automation within Financial Services and/or Technology.</p>
<p>You will contribute to and help lead the technical design of data governance controls, data models, and integration patterns, partnering closely with Technology and Operations teams.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build/enhance data governance frameworks, controls, standards, and workflows (policies, definitions, entitlements).</li>
<li>Create data quality rules and monitoring; automate exception detection, alerting, remediation, SLAs, and RCA.</li>
<li>Develop Python/SQL/ETL-ELT automation for checks, controls, and reporting; deliver Tableau/Power BI dashboards and KPIs.</li>
<li>Contribute to conceptual/logical/physical data modeling for Trader Master and core domains.</li>
<li>Support MDM capabilities: golden record, matching/merging, survivorship, stewardship workflows; help shape MDM strategy.</li>
<li>Implement access/entitlement governance (RBAC, row/column security) across DB/warehouse/BI with audit compliance.</li>
<li>Maintain catalog, glossary, lineage, schema history, impact analysis; manage structured change workflows.</li>
<li>Define integration patterns (batch/API/streaming) and build reconciliations/validations across systems.</li>
<li>Manage historical/temporal data (validation, backfills, remediation) supporting regulatory/reporting/analytics.</li>
<li>Produce technical documentation (designs, runbooks, data dictionaries), share knowledge, and mentor juniors.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, Information Systems, Mathematics, Finance, or related field; advanced degree (MS, MBA, or equivalent) is a plus.</li>
<li>5–8 years of experience in financial services or fintech with hands-on work in data engineering, data management, or data architecture roles; exposure to trading strategies, fund structures, and financial products strongly preferred.</li>
</ul>
<p>Technical Expertise (Required):</p>
<ul>
<li>Strong Python and SQL; experience with data warehousing + ETL/ELT.</li>
<li>Familiarity with MDM/data governance tools (e.g., Collibra, Informatica, Alation) and Tableau/Power BI.</li>
<li>Proven ability to lead delivery, solve complex data issues, and communicate with technical/non-technical stakeholders.</li>
<li>Preferred certs: DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $70,000 to $160,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$70,000 to $160,000</Salaryrange>
      <Skills>Python, SQL, ETL/ELT, Data Warehousing, Tableau/Power BI, MDM/data governance tools, Collibra, Informatica, Alation, DAMA/CDMP, cloud (AWS/Azure/GCP), Scrum, BI/data engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global alternative investment management firm.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954926796</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows that connect operational systems, data for analytics, and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform well, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build, and test data products that are complex or large scale, and build teams to complete data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards, and when to apply them, compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products and services as they occur, initiate actions, monitor services and identify trends to resolve problems, determine the appropriate remedy and assist with its implementation, and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, collaborate with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications, and define test conditions, identify issues and risks associated with work, analyse and report test activities and results.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor&apos;s degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3fa0b80f-842</externalid>
      <Title>Staff Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>Job Title: Staff Software Engineer, Public Sector</p>
<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software solutions for the public sector</li>
<li>Work closely with cross-functional teams to develop and deploy software applications</li>
<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>
<li>Develop and maintain software documentation</li>
<li>Participate in code reviews and ensure that code meets quality standards</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Proficiency in programming languages such as Java, Python, or C++</li>
<li>Experience with Agile development methodologies</li>
<li>Strong understanding of software design patterns and principles</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>10+ years of experience in software development</li>
<li>Experience with cloud-based technologies such as AWS or Azure</li>
<li>Experience with DevOps practices</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Salary Range: $252,000-$362,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>Full Stack Development</li>
<li>Cloud-Native Technologies</li>
<li>Data Engineering</li>
<li>AI Application Integration</li>
<li>Problem Solving</li>
<li>Collaboration and Communication</li>
<li>Adaptability and Learning Agility</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with modern web development frameworks</li>
<li>Familiarity with cloud platforms</li>
<li>Understanding of containerization and container orchestration</li>
<li>Knowledge of ETL processes</li>
<li>Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Familiarity with integrating Large Language Models</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$362,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674913005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ad5c420d-b2d</externalid>
      <Title>Senior Solutions Architect - Lakebase</Title>
      <Description><![CDATA[<p>The Solutions Architect (Lakebase) team executes on Databricks&#39; strategic Product Operating Model that provides enhanced focus on earlier stage, highly prioritised product lines in order to establish product market fit, and set the course for rapid revenue growth.</p>
<p>They are part of a global go-to-market team mandate, though individually will cover a specific, local region. Clients may span across one or more business units and verticals.</p>
<p>By working in partnership with direct account teams, they will jointly engage clients, foster the necessary relationships, and position the specific product line in depth, so as to give clients compelling reasons to adopt and grow their usage of the given product.</p>
<p>The Solutions Architect (Lakebase) is paired with an Account Executive aligned to a given product line, with specific targets accordingly. Together, they will devise and implement a strategy across their assigned set of accounts, and develop and deliver presentations, demos, and other assets so that clients can make an informed decision to adopt the product line in a meaningful way.</p>
<p>The Lakebase product-line requires the following core technical competencies:</p>
<ul>
<li>10+ years of transactional database (OLTP) expertise across engineering, product development, administration, and pre-sales, with a proven track record of designing and delivering client-facing solutions.</li>
<li>Credibility in influencing OLTP products with the market insight needed to shape and prioritise roadmap capabilities.</li>
<li>Experience architecting solutions that integrate transactional data systems within broader Big Data, Lakehouse, and AI ecosystems.</li>
<li>Infrastructure, platform and administration expertise around disaster recovery, high availability, backup and recovery, scale-out methods, identity and security management, migrations (vendor-to-vendor, on-prem to cloud)</li>
</ul>
<p>Impact</p>
<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</p>
<p>As a trusted advisor, serve as an expert Solutions Architect and &quot;champion,&quot; building technical credibility with stakeholders to drive product adoption and vision.</p>
<p>Enable clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</p>
<p>Influence the product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</p>
<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical environments.</p>
<p>Competencies &amp; Responsibilities</p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organisations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience (in two or more) and understanding across the fields of data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Applied Mathematics, Engineering or similar.</li>
<li>A track record of driving complex projects to completion in fast-paced, customer-facing environments.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Transactional database (OLTP), Cloud infrastructure, Data engineering, Data warehousing, AI, ML, Governance, Transactional systems, App development, Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407181002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>859ea28b-09c</externalid>
      <Title>Sr. Manager, Field Engineering - Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a Senior Manager, Solutions Architects to lead our Federal System Integrators and Defense Industrial Base (FSI/DIB) team. You will lead and promote a dynamic team focusing on enterprise software, big data/analytics, data engineering, data science, data warehousing, and generative AI.</p>
<p>Leading the technical sales team, you will partner with Sales (and other Field Engineering technical segments) to increase revenue and help customers become wildly successful. You&#39;ll scale and maintain an outstanding Field Engineering team that operates efficiently to help accelerate Databricks&#39; market growth.</p>
<p>The Impact You Will Have:</p>
<ul>
<li>Alongside the Sales Director, you will set the vision and strategy for how this team will accelerate transformative outcomes for FSI and DIB customers</li>
<li>You will hire, train, lead, and grow the Solutions Architect team for a company in high-growth mode</li>
<li>Help your customers achieve exceptional success with Databricks and deliver substantial value to their businesses</li>
<li>You will maintain a robust hiring pipeline at all times</li>
<li>Establish relationships across the business to make your customers and team successful</li>
<li>Keep your team of SAs ahead of the technical curve</li>
</ul>
<p>An SA adds value by maintaining advanced knowledge of the technology stack. You will make sure that your team is continuously learning and working to provide our customers with the most comprehensive solutions for their needs</p>
<p>What We Look For:</p>
<ul>
<li>Proven experience building and leading technical pre-or post-sales teams</li>
<li>5+ years of experience in the data space with a technical product (i.e., data warehousing, big data, machine learning, or more recently with generative AI)</li>
<li>Alternatively, 5+ years of experience in mission-focused space impacting DIB or FSI from a technology perspective</li>
<li>Trusted advisor to technical executives that guide strategic data infrastructure decisions</li>
<li>Lead a team through best practices for technical qualification, technical validations, architecture discussions, and product demonstrations</li>
<li>Strategic mindset focused on driving customer outcomes in an accelerated fashion</li>
<li>Experience hiring candidates that continually raise the bar, ramping them up to be successful, and promoting into larger roles</li>
<li>Create a positive morale for the team and help foster a working relationship between Field Engineering, Sales, and other important internal cross-functional teams</li>
<li>Experience working cross-functionally with other teams such as Sales, Product Management, Engineering, and Customer Success</li>
<li>Technical and business skills to earn the trust of Engineering talent and leadership at Databricks</li>
<li>Demonstrated architectural influence. Able to influence and review complex architectures; guiding your team and customers towards ideal solutions</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $192,100-$264,175 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data engineering, data science, data warehousing, generative AI, enterprise software, big data/analytics, technical sales team management, cross-functional team management, strategic planning, hiring and talent development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company providing a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8332835002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f1a5b85-116</externalid>
      <Title>Mission Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Mission Software Engineer to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work directly with customers to understand their problems and translate those into features in Scale&#39;s platform.</li>
<li>Be open to &gt;50% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer&#39;s data to Scale&#39;s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites.</li>
<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>Track record of success as a hybrid customer-facing engineer and forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc.</li>
<li>Networking experience: Understanding of networking technologies, configuration (ports, protocols, etc)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval.</p>
<p>Benefits:</p>
<ul>
<li>Comprehensive health, dental and vision coverage</li>
<li>Retirement benefits</li>
<li>A learning and development stipend</li>
<li>Generous PTO</li>
<li>Commuter stipend</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$138,000-$292,560 USD</Salaryrange>
      <Skills>Python, JavaScript, Node, React, Cloud-Native Technologies, Linux, Networking, Data Engineering, ETL, Data Modeling, Data Warehousing, Data Governance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4481921005</Applyto>
      <Location>Boston, Massachusetts ; Honolulu, HI; San Diego, CA; San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10290548-1ea</externalid>
      <Title>Solutions Architect - Public Sector (LEAPS)</Title>
      <Description><![CDATA[<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to accomplish value.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>
<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>
<li>Consulting on big data architecture, implementing proof of concepts for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications</li>
<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>
<li>Becoming an expert in, and promoting Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>
<li>Traveling to customers in your region</li>
</ul>
<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified analytics platform for data engineering, data analytics, and data science and machine learning.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8320126002</Applyto>
      <Location>Maryland; Virginia; Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f05d190-fce</externalid>
      <Title>Sr. Manager, Field Engineering - Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>A key responsibility of this role is to hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of Digital native customers.</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proof of concepts, and establish themselves as trusted advisors throughout the customer life-cycle.</li>
<li>Travel to customer sites for executive sessions, technical workshops, and building relationships.</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and team.</li>
<li>Stay current with emerging Data and AI trends in the digital native tech sector.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years of experience in the data space with a technical product (i.e. data warehousing, big data, cloud infrastructure, or machine learning).</li>
<li>5+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment.</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets.</li>
<li>Inspiring a team vision about the unique nature of the digital natives business.</li>
<li>A history of execution by managing workloads and consumption with sales, product, and engineering counterparts.</li>
<li>Experience owning executive alignment in accounts that guide strategic decisions.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>
<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>
<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>
<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range $192,100-$264,175 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, technical product, digital native customers, data, analytical, and AI workloads, Solutions Architects, customer-facing teams, hiring, onboarding, and supporting team members, high-growth environment, executive alignment, accounts that guide strategic decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8496009002</Applyto>
      <Location>Colorado; Remote - California; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>de79b405-a32</externalid>
      <Title>Sr. Manager, Field Engineering - Financial Services</Title>
      <Description><![CDATA[<p>Job Title: Sr. Manager, Field Engineering - Financial Services</p>
<p>We are seeking a seasoned leader to join our Field Engineering team as a Sr. Manager, Field Engineering - Financial Services. As a key member of our team, you will be responsible for leading a team of Solutions Architects for the Financial Services segment of Databricks&#39; Regulated Industries Field Engineering team.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and promote a diverse team focusing on enterprise software, big data/analytics, data engineering, data science, data warehousing, and generative AI.</li>
<li>Partner with Sales (and other Field Engineering technical segments) to increase revenue and help customers achieve success through Databricks&#39; Data Intelligence Platform.</li>
<li>Scale and maintain an outstanding Field Engineering team that is efficient in its operations to help accelerate Databricks&#39; growth in the market.</li>
</ul>
<p>The Impact You Will Have:</p>
<ul>
<li>Hire, train, and grow a team of Solutions Architects for a company in high-growth mode.</li>
<li>Make your customers extremely successful with Databricks and provide outsized value to their businesses.</li>
<li>Maintain a robust hiring pipeline of highly qualified, bar-raising candidates.</li>
<li>Establish relationships across the business to make your customers and team successful.</li>
<li>Keep your team of SAs ahead of the technical curve by ensuring they maintain advanced knowledge of the Databricks technology stack.</li>
<li>Ensure that your team is continuously learning and working to provide our customers with the most comprehensive solutions for their needs.</li>
</ul>
<p>What We Look For:</p>
<ul>
<li>Proven experience building and leading technical pre-sales teams.</li>
<li>10+ years of experience in the data space with a technical product (i.e., data warehousing, big data, machine learning, or more recently with generative AI).</li>
<li>Trusted advisor to technical executives that guide strategic data infrastructure decisions.</li>
<li>Strong understanding of consumption-driven business models and the recipes for long-term growth and success.</li>
<li>Experience hiring candidates that continually raise the bar, ramping them up to be successful, and promoting into larger roles.</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range for this role is $192,100-$264,175 USD.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data engineering, data science, data warehousing, generative AI, big data/analytics, enterprise software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8466584002</Applyto>
      <Location>Illinois; Massachusetts; New York; Virginia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ea3861b8-d43</externalid>
      <Title>Manager, Field Engineering</Title>
      <Description><![CDATA[<p>Job Title: Manager, Field Engineering</p>
<p>We are seeking a highly experienced Manager, Field Engineering to lead a team of Solutions Architects for our Field Engineering team in Latam. As a Manager, Field Engineering, you will be responsible for leading and promoting a dynamic team focusing on enterprise software, big data/analytics, data engineering, data science, data warehousing, and generative AI.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and promote a dynamic team of Solutions Architects</li>
<li>Partner with Sales and other Field Engineering technical segments to increase revenue and help customers become wildly successful</li>
<li>Scale and maintain an outstanding Field Engineering team that is efficient in its operations to help accelerate Databricks&#39; growth in the market</li>
</ul>
<p>The Impact You Will Have:</p>
<ul>
<li>Hire, train, and grow a team of Solutions Architects for a company in high-growth mode</li>
<li>Make your customers extremely successful with Databricks and provide outsized value to their businesses</li>
<li>Maintain a robust hiring pipeline at all times</li>
<li>Establish relationships across the business to make your customers and team successful</li>
<li>Keep your team of SAs ahead of the technical curve</li>
</ul>
<p>What We Look For:</p>
<ul>
<li>Proven experience building and leading technical pre-or post-sales teams</li>
<li>10+ years of experience in the data space with a technical product</li>
<li>Trusted advisor to technical executives that guide strategic data infrastructure decisions</li>
<li>Strong understanding of consumption-driven business models and the recipes for long-term growth and success</li>
<li>Experience hiring candidates that continually raise the bar, ramping them up to be successful, and promoting into larger roles</li>
</ul>
<p>About Databricks:</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI.</p>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data science, data warehousing, generative AI, enterprise software, big data/analytics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8360067002</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability to include defining requirements with stakeholders and implementation until it is accepted by the stakeholders.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers.</li>
<li>Manage the full lifecycle of feature development from requirement definition to deployment on classified networks.</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements.</li>
<li>Lead customer engagements to translate mission needs into technical requirements.</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria.</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving.</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings.</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li>Full Stack Development: Proficiency in front-end, back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>AI Application Integration: Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to defining and evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle); $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado); $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>03224784-9c2</externalid>
      <Title>Senior Data Engineering Manager</Title>
      <Description><![CDATA[<p>Job Title: Senior Data Engineering Manager</p>
<p>Location: Dublin, Ireland</p>
<p>Department: R&amp;D</p>
<p>Job Description:</p>
<p>Intercom is seeking a Senior Data Engineering Manager to lead the design and evolution of the core infrastructure that powers our entire data ecosystem. As a leader, you will partner with product and business teams to drive key data initiatives and ensure the success of our data engineering team.</p>
<p>Responsibilities:</p>
<ul>
<li>Next-Gen Platform Evolution: Partner with product and business teams to design and implement the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>
<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build self-service tooling and infrastructure that enables them to move fast and deploy safely.</li>
<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment.</li>
<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>
</ul>
<p>Strategic Impact You&#39;ll Drive:</p>
<ul>
<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>
<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible.</li>
<li>Platform &amp; Warehousing Strategy: Lead the architectural and cost review and revamp of our core data infrastructure to ensure it can scale exponentially for future growth and advanced use cases.</li>
</ul>
<p>Recent Wins You&#39;ll Build Upon:</p>
<ul>
<li>AI-assisted Local Analytics Development Environment for Airflow and DBT.</li>
<li>Data-rich AI apps containerized on Snowflake SPCS.</li>
<li>A new, modern data catalog solution.</li>
<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>A leader, a builder, and a problem-solver who thrives on solving real-world business problems.</li>
<li>7+ years of experience in the data space, leading teams of 6+ engineers.</li>
<li>Stakeholder focus: ability to communicate complex technical solutions to a business-focused audience and vice versa.</li>
<li>Technical depth: not afraid to get hands dirty and write code when needed.</li>
<li>A leader and mentor: naturally recognizes opportunities to step back and mentor others.</li>
</ul>
<p>Bonus Points (Our Modern Stack Knowledge):</p>
<ul>
<li>Airflow at scale: extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>
<li>Modern data stack fluency: familiarity with tools like Snowflake and DBT.</li>
<li>Future-focused: keeps a keen eye on industry trends and emerging technologies.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up.</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen.</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%.</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it.</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones.</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme. With secure bike storage too.</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
<p>Policies:</p>
<ul>
<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home.</li>
<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Management, Data Quality, Automation, Cloud Computing, Data Warehousing, Big Data, Machine Learning, Artificial Intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer experiences for businesses. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7574762</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e715f09-34a</externalid>
      <Title>Solutions Architect, Commercial</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>As a Solutions Architect (SA), your role is to help prospects understand the value that dbt Cloud can bring to their organisation. You&#39;ll lead technical discussions with prospective customers, uncover their data challenges, and demonstrate how dbt Cloud can meet their needs through live demos and technical workshops.</p>
<p>Internally, you&#39;ll contribute to building and refining our processes and playbooks, while acting as the voice of the customer to ensure we continue developing products that solve real problems and delight our users.</p>
<p>Responsibilities</p>
<ul>
<li>Spend the majority of your time on pre-sales opportunities to understand customer use cases and identify the value dbt Cloud can bring to their organisation</li>
<li>Consult with potential customers to uncover their data challenges and highlight how dbt Cloud can alleviate pain points and support their goals</li>
<li>Own the demonstration of technical and business impact by delivering tailored demos of dbt Cloud and providing expert guidance</li>
<li>Collaborate closely with Account Executives, building strong, trust-based relationships and offering strategic input throughout the deal process</li>
<li>Maintain ongoing relationships with technical stakeholders within your accounts</li>
<li>Partner with internal teams to improve how we work together, represent the voice of the customer in product discussions, and contribute to cross-functional initiatives</li>
<li>Participate in our knowledge-sharing loop by helping improve team processes, refining assets, and enabling others through collaboration</li>
<li>Create and deliver external-facing content through live events, blog posts, recorded tutorials, or other educational resources</li>
</ul>
<p>Requirements</p>
<ul>
<li>4+ years of experience as a data practitioner or consultant in data operations or analytics</li>
<li>Strong technical foundation, with a solid understanding of modern data warehousing architectures, the modern data stack, and proficiency in SQL</li>
<li>Prior experience with dbt is preferred; bonus points for dbt certification</li>
<li>Experience working asynchronously as part of a partially remote, distributed team</li>
<li>High degree of comfort presenting to diverse stakeholders or audiences, ideally in an externally facing role</li>
<li>Ability to operate effectively in ambiguous, fast-paced environments and think on your feet during customer conversations</li>
<li>Desire to be a strong team player, both within the SA team as we evolve our ways of working, and in daily collaboration with Account Executives as part of a deal team</li>
<li>Openness to travel; we value the power of in-person relationship building, both internally and externally, and expect willingness to travel to events as needed</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Basic Python competency and advanced SQL knowledge</li>
<li>Proven experience in a pre-sales tech role</li>
<li>Experience with traditional ETL tooling</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:</li>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>OTE Range (Select Locations)</p>
<p>$145,000-$172,000 USD</p>
<p>OTE Range</p>
<p>$142,000-$202,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$145,000-$172,000 USD</Salaryrange>
      <Skills>data operations, analytics, modern data warehousing architectures, modern data stack, SQL, dbt, python, pre-sales tech role, ETL tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, helping data teams transform raw data into reliable, actionable insights.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4671489005</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53024247-9d6</externalid>
      <Title>Senior Solutions Architect - Lakewatch</Title>
      <Description><![CDATA[<p>We are seeking a Senior Solutions Architect to join our Lakewatch team in London. As a Senior Solutions Architect, you will provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory, driving Lakewatch adoption from initial data offload through full SIEM augmentation or replacement.</li>
<li>Serve as a trusted advisor and expert Solutions Architect, building technical credibility with CISOs, security architects, SOC leadership, and security analysts to drive product adoption and vision.</li>
<li>Enable clients at scale through workshops, POC execution, and customer-facing collateral that increases technical knowledge and demonstrates the value of an open agentic SIEM architecture.</li>
<li>Influence the product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>
<li>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical security environments.</li>
<li>Establish and refine the sales qualification and POC intake process, ensuring well-scoped engagements that maximize customer success and minimize friction for R&amp;D.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level security strategy and product adoption.</li>
<li>Experience with design and implementation of data and AI applications in cybersecurity, including anomaly detection, behavioral analytics, and agentic AI workflows for triage and investigation.</li>
<li>Proficiency in programming, debugging, and problem-solving using SQL, Python, and AI tools.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes in cybersecurity.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP), with an understanding of cloud-native security logging and monitoring.</li>
<li>Deep experience in security operations, with broad familiarity across one or more of the following: data engineering, data warehousing, AI/ML for security, data governance, and streaming.</li>
<li>Undergraduate degree (or higher) in a technical field such as Computer Science, Cybersecurity, Applied Mathematics, Engineering, or similar.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cybersecurity engineering, security operations, security architecture, design and implementation of data and AI applications, anomaly detection, behavioral analytics, agentic AI workflows, SQL, Python, AI tools, cloud-native security logging and monitoring, data engineering, data warehousing, AI/ML for security, data governance, streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that unifies and democratizes data, analytics, and AI for over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8493140002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62bc6bad-261</externalid>
      <Title>Lead Solutions Architect (Pre-sales) – Public Sector</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re seeking a Lead Solutions Architect to join our Japan Public Sector team. As a senior technical leader, you will be responsible for shaping and executing the data and AI strategy for key public sector customers in Japan.</p>
<p>Your impact will be significant, as you will own the technical strategy for these customers, defining the target architecture and roadmap for their Databricks-based data and AI platform. You will also lead solution design for priority government use cases, such as citizen 360 and eligibility, program analytics, smart infrastructure, cybersecurity, and fraud/waste/abuse.</p>
<p>In addition, you will design for security and compliance, ensuring architectures meet public sector expectations on data residency, access control, governance, and auditability. You will turn pilots into platforms, linking Databricks to visible mission outcomes and using those wins to drive expansion across bureaus, departments, and levels of government.</p>
<p>As a mentor, you will coach Solutions Architects and partners on how to work effectively with Japan public sector stakeholders and patterns on the Databricks platform.</p>
<p>We&#39;re looking for someone with 10+ years of experience in data platforms, analytics, or cloud architecture, including significant experience with public sector organizations in Japan. You should have a proven ability to build trust with senior government stakeholders and lead high-stakes technical discussions in Japanese.</p>
<p>You should also have a strong technical background in Data Engineering, Data Warehousing/Analytics, AI/ML, or Cloud Architecture, with a focus on solution design and platforms. Experience architecting data and analytics solutions for public sector use cases is essential.</p>
<p>If you&#39;re excited about helping Japan&#39;s Public Sector use data and AI to deliver better outcomes – while setting the technical bar for our GTM in Japan – we&#39;d love to hear from you.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Engineering, Data Warehousing/Analytics, AI/ML, Cloud Architecture, Solution Design, Platforms, Public Sector Experience, Japanese Language, Business-Level English</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data and analytics.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437076002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0c92b4-c05</externalid>
      <Title>Sr. Manager, Field Engineering</Title>
      <Description><![CDATA[<p>We are looking for a dynamic Sr. Manager, Field Engineering to lead a team of Solution Architects within our Retail vertical. As a key member of our Field Engineering team, you will be responsible for helping customers in the Consumer Goods segment succeed with Databricks and providing outsized value to their businesses. You will also be responsible for maintaining a robust hiring pipeline, establishing relationships across the business, and partnering with sales leadership to hit sales and consumption targets.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Hiring, training, growing, and managing a team of Solutions Architects</li>
<li>Making customers in the Consumer Goods segment successful with Databricks and providing outsized value to their businesses</li>
<li>Maintaining a robust hiring pipeline at all times</li>
<li>Establishing relationships across the business to make customers and team successful</li>
<li>Partnering with sales leadership to hit sales and consumption targets</li>
</ul>
<p>The ideal candidate will have 7+ years of professional experience in the data space with a technical product, 3+ years of experience in the field, architecting and delivering data-driven solutions for major accounts within the Retail, Consumer Products, Travel &amp; Hospitality vertical, and a deep familiarity with the buy-side and supply-side ecosystem.</p>
<p>In addition, the candidate should have demonstrated expertise with the data collaboration ecosystem, 3+ years of experience building and leading technical pre-sales teams, a deep technical understanding of the impact that Data + AI can drive within the Retail industry, and a track record as a trusted advisor to technical executives who guide strategic data infrastructure decisions.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$192,100-$264,175 USD</Salaryrange>
      <Skills>data warehousing, big data, machine learning, data collaboration ecosystem, customer data platforms, clean rooms, data marketplaces</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8362888002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86363ae6-10f</externalid>
      <Title>Manager, Field Engineering - Strategic Digital Native Business</Title>
      <Description><![CDATA[<p>As the manager of the Digital Natives Solutions Architect (SA) team, you will focus on growing and developing a team of SAs, driving the adoption of the Databricks Platform at the largest, fastest-growing tech companies.</p>
<p>You&#39;ll be responsible for leading the team in establishing best practices throughout the full lifecycle of the customers&#39; workloads. You will help each team member achieve success, productivity, and career growth. You will also represent Databricks as a technical leader with some of its most important customers.</p>
<p>This role will work in close collaboration with sales, services, product, and engineering to drive solutions and outcomes for these highly technical customers. You will utilize excellent communication skills to clearly explain and demonstrate complex solutions to both internal and external stakeholders.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire and develop a team of deeply technical Solutions Architects capable of guiding digital native customers across a wide range of data, analytical, and AI workloads.</li>
<li>Adapt the SA team&#39;s skills and engagement model to match the needs of Digital native customers</li>
<li>Consistently meet or exceed targets by making sure the SA team knows how to technically qualify workloads, identify important use cases, build proof of concepts, and establish themselves as trusted advisors throughout the customer life-cycle</li>
<li>Travel to customer sites for executive sessions, technical workshops, and building relationships</li>
<li>Establish relationships across internal organizations (engineering, product, services, sales, etc.) to ensure the success of the customers and team</li>
<li>Stay current with emerging Data and AI trends in the digital native tech sector</li>
</ul>
<p>What we look for:</p>
<ul>
<li>4+ years of experience in the data space with a technical product (e.g., data warehousing, big data, cloud infrastructure, or machine learning)</li>
<li>3+ years of experience building and leading technical customer-facing teams: hiring, onboarding, and supporting team members in a high-growth environment</li>
<li>A history of building a territory, growing strategic accounts, and exceeding targets</li>
<li>Ability to inspire a team vision about the unique nature of the digital natives business</li>
<li>A history of execution, managing workloads and consumption with sales, product, and engineering counterparts</li>
<li>Experience owning executive alignment in accounts that guide strategic decisions</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>For more information regarding which range your location is in, visit our page here.</p>
<p>Local Pay Range $172,500-$237,150 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$172,500-$237,150 USD</Salaryrange>
      <Skills>data warehousing, big data, cloud infrastructure, machine learning, data analysis, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8458032002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45c42a6e-519</externalid>
      <Title>Customer Solutions Architect (Austin)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Customer Solutions Architect to help current customers gain the most value from their dbt Cloud deployment. As a Customer Solutions Architect, you will monitor an existing book of business, lead technical discussions with customers, uncover new data challenges, and showcase how dbt Cloud can address their needs through live demos and technical workshops.</p>
<p>Responsibilities</p>
<ul>
<li>Manage a portfolio of Commercial or Enterprise customers and proactively monitor the health of your accounts, product adoption, and utilization across your book of business to identify opportunities for potential expansion and churn or contraction risks</li>
<li>Improve customer loyalty and retention through building and maintaining strong relationships with key technical stakeholders within customer accounts, and understanding their analytics needs and use cases to determine where dbt Cloud can help them achieve their goals</li>
<li>Increase the value customers obtain from dbt Cloud through educating your customer base on new products and features as they are launched and on existing products and features that they may not be making use of</li>
<li>Collaborate closely with Sales Directors and Solutions Architects on your accounts, building strong trust-based relationships and providing strategic input on the customer lifecycle and renewals processes</li>
<li>Be the voice of the customer in product discussions, work with the team to improve the way we work together, and participate in other cross-functional activities</li>
<li>Participate in the knowledge loop helping to improve our processes and assets and enabling others on the team</li>
<li>Create and deliver external facing content through live events, blog posts, recorded tutorials, or other content</li>
</ul>
<p>What You&#39;ll Need</p>
<ul>
<li>2+ years of experience in a post-sales role, such as a technical account manager or CSE</li>
<li>A solid technical background, with a firm understanding of modern data warehousing architectures, the analytics stack, and SQL proficiency</li>
<li>High degree of comfort presenting to various stakeholders or audiences, ideally with experience in an externally facing role</li>
<li>Ability to operate in an ambiguous and fast-paced environment and think on your feet when engaged in customer conversations</li>
<li>Desire to be part of a team, both as an active member of the Customer Solutions Architect team as we continue to evolve and improve how we work, and in your day-to-day work with Sales Directors and Solutions Architects</li>
<li>Openness to travel</li>
</ul>
<p>What Will Make You Stand Out</p>
<ul>
<li>Bonus points for dbt certification; prior dbt experience will be very helpful in this role</li>
<li>Basic Python competency and advanced SQL knowledge</li>
<li>Experience with ancillary tools, managing data infrastructure, APIs, etc.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:</li>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs total rewards during your interview process.</p>
<p>OTE Range (Select Locations)</p>
<p>$110,000-$140,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$110,000-$140,000 USD</Salaryrange>
      <Skills>modern data warehousing architectures, analytics stack, SQL proficiency, post-sales role, technical account manager, dbt certification, prior dbt experience, basic python competency, advanced SQL knowledge, ancillary tools, managing data infrastructure, APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4664399005</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3b6af70e-6ba</externalid>
      <Title>Field Engineering, Data Warehousing Product Specialist</Title>
      <Description><![CDATA[<p>As the Data Warehousing Product Specialist in the Field Engineering team, you will be defining and driving our technical go-to-market strategy in the Asia Pacific and Japan region.</p>
<p>You will be directly working with customers to guide and influence their data warehousing architecture and decisions. You will be acting as a trusted advisor for senior executives and your in-depth technical knowledge will ensure our customers are successful in leveraging Databricks to solve their business problems.</p>
<p>You will be the technical expert to support our field engineering teams internally and you will be expected to help enable the team to understand the key differentiators of our product against our competitors.</p>
<p>You will partner with the Product Manager(s) to help to define the product direction based on local knowledge and inform our product strategy with our go-to-market field teams.</p>
<p>You will not have any direct reports, but you will recruit and lead a group of specialists across the field dedicated to scaling your impact.</p>
<p>You will also be an external thought leader in the market, speaking at conferences and online webinars and publishing blog posts.</p>
<p>You will meet with customers to communicate the vision and gather feedback.</p>
<p>You have expertise in cloud-based data warehousing technologies, modern data platform architectures, traditional data warehousing techniques, and preferably industry domain knowledge.</p>
<p>You will excel in creating and articulating a compelling value proposition for our customers and enabling Account Executives and Field Engineers to operate effectively using best practices and assets that you own and develop.</p>
<p>The impact you will have:</p>
<ul>
<li>Lead the vision and strategy for Data Warehousing Adoption in Asia Pacific and Japan</li>
<li>Develop materials for our GTM team to effectively communicate our value proposition</li>
<li>Support our field teams in key strategic pursuit opportunities</li>
<li>Lead a team of subject matter experts (SMEs) to scale your impact in local regions and enable the broader field team to have confidence in competing in the market</li>
<li>Act as the thought leader to build confidence and relationships with our customers at the executive level</li>
<li>Present externally at conferences, events, and webinars and publish content to establish our position in the market</li>
<li>Work with post sales and partners to establish migration best practices</li>
<li>Act as a cross-functional representative with Product Management, Product Marketing, and Engineering on our go-to-market motion</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data warehousing, cloud-based data warehousing technologies, modern data platform architectures, traditional data warehousing techniques, industry domain knowledge</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8151567002</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>40db054b-06d</externalid>
      <Title>Senior Product Manager, Access</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Technical Product Manager to join our Access team within the Acuity Scheduling department. As a Senior Technical Product Manager for Acuity Scheduling, you&#39;ll own the systems that control how customers sign in, manage identity, and pay for the platform.</p>
<p>This is a hybrid role working 3 days per week from our Aveiro office. You will report to the Group Product Manager on the Acuity Scheduling team.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Product Ownership: Act as the primary product owner for your team, setting the roadmap and priorities based on technical feasibility, user impact, and business goals.</li>
<li>Technical Strategy &amp; Roadmap: Collaborate with engineers to define a scalable technical strategy that aligns with product goals, focusing on data architecture, systems design, and service-based solutions.</li>
<li>Cross-functional Collaboration: Partner with engineering, data science, and UX teams to understand requirements, manage trade-offs, and deliver solutions that balance speed and scalability.</li>
<li>Cross-organization Collaboration: Work directly with Squarespace Identity and Security teams to develop and realise a shared vision of a singular identity and authentication system.</li>
<li>Stakeholder Communication: Translate technical architecture and system requirements into clear, actionable items for stakeholders across the company, including senior leadership and non-technical teams.</li>
<li>Quality, Security &amp; Performance Optimisation: Focus on the system&#39;s stability, reliability, and scalability by working closely with the engineering team on continuous improvement, platform security and technical debt management.</li>
<li>Architecture Oversight: Guide architectural decisions to ensure optimal security, data flow, storage, and access within our product ecosystem. Advocate for sustainable choices in a service-oriented approach to component-based architecture.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience: 4-6 years in product management or a technical role with product ownership, preferably within a data-driven environment; ideally in an identity-centric role.</li>
<li>Technical Expertise: Strong background as the technical product lead on teams owning data architecture, systems design, and service-based architecture (e.g., microservices), with the ability to engage deeply in technical discussions and decisions.</li>
<li>Systems Thinking: Proven experience in end-to-end system thinking and design, including a strong grasp of component-based architectures, data storage options, and integration layers.</li>
<li>Data Architecture: Hands-on experience with data modeling, database design, and data warehousing principles, including familiarity with large data model improvement initiatives.</li>
<li>APIs &amp; Integration: Understanding of RESTful API design, OAuth, identity federation, and integration patterns to ensure seamless interoperability between services and systems.</li>
<li>Analytical Mindset: Proficiency in using data to inform decisions autonomously, including experience with data analysis and product analytics tools.</li>
<li>Communication Skills: Ability to communicate complex technical concepts to both technical and non-technical audiences, bridging the gap between product vision and technical execution.</li>
<li>Agile Experience: Familiarity with Agile methodologies, including backlog management, sprint planning, and cross-functional team collaboration.</li>
<li>Project Management: Familiarity with project management tools like Jira, Asana, or similar.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Technical Transformation: Experience leading teams and/or organisations from a monolithic architecture to a service-oriented one.</li>
<li>Identity Experience: High-level understanding of and experience with core user registration flows and OAuth as it pertains to user identity needs.</li>
<li>Technical Documentation: Experience documenting technical data architecture, service flows, and system dependencies to ensure alignment and knowledge-sharing within the team</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<ul>
<li>Health insurance with 100% covered premiums for you, your spouse or partner, and dependent children, including medical, dental, and vision</li>
<li>Life and Disability Insurance</li>
<li>Pension benefits with employer match</li>
<li>Fertility and adoption benefits</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Statutory paid time off and all statutory leaves, as required</li>
<li>Meal Allowance and Flex Benefits Account</li>
<li>Employee donation match to community organisations</li>
<li>An office in the easily accessible city centre of Aveiro</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, systems design, service-based architecture, microservices, data modeling, database design, data warehousing, RESTful API design, OAuth, identity federation, integration patterns, data analysis, product analytics tools, Agile methodologies, backlog management, sprint planning, cross-functional team collaboration, project management tools, Jira, Asana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and operates in more than 200 countries.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7698954</Applyto>
      <Location>Aveiro</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1a5fc9df-c32</externalid>
      <Title>Lead Solutions Consultant | DACH</Title>
      <Description><![CDATA[<p>As a Lead Solutions Consultant, you will leverage your professional tenure, robust business acumen, and in-depth technical expertise to guide prospective customers through their Airtable evaluation and expansion.</p>
<p>You will partner across lines of business as you lead the technical evaluation and compel decision-makers to choose Airtable as their critical operational workflow management tool.</p>
<p>The Solutions Consulting team at Airtable sits at the confluence of Sales, Services, Product, and Marketing and takes a customer-centric approach to creating value through intentful listening, storytelling, expert guidance, tailored solutions, and applying a highly consultative process.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with the Pre-Sales teams to identify, qualify, and drive revenue opportunities at Enterprise accounts - net new and install motions</li>
<li>Build meaningful relationships with and serve as a trusted advisor to a prospect or customer&#39;s technical teams</li>
<li>Own the technical evaluation process showcasing customer-centric value within the context of the customer&#39;s unique requirements, workflows, and business process needs</li>
<li>Translate the value of Airtable&#39;s tooling into the language of the customer</li>
<li>Manage multiple sales cycles simultaneously, proactively communicating with stakeholders and prioritising effectively</li>
<li>Consult with credibility, bringing a well-rounded and unique perspective with hands-on cross-functional experience, e.g., direct experience in marketing, program/project management, consulting, operations, etc.</li>
<li>Inform the product team and ultimately the product roadmap with meaningful market potential findings</li>
<li>Represent Airtable in every engagement with the utmost care</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bilingual: Fluent in both English and German</li>
<li>An experienced customer-centric professional with a well-balanced skillset spanning business acumen, technical knowledge, and effective communication skills</li>
<li>7+ years of experience in SaaS Pre-Sales</li>
<li>10+ years industry experience</li>
<li>Experience managing sales cycles and owning the technical evaluation</li>
<li>Working knowledge of Marketing, Product Operations, Retail MGMT, and other Business Units</li>
<li>Knowledge of business operations - can make assumptions as it relates to value drivers for various personas and industries</li>
<li>Understands best practices for structuring data in a relational database and database architecture</li>
<li>Passionate about creating unique solutions for complex business problems</li>
<li>Loves the spotlight - adaptable and compelling communicator and presenter, easily matches the language of the room, author and storyteller</li>
<li>Thirst for knowledge - researcher, analyst, emerging trends enthusiast</li>
<li>Technically savvy, demonstrating a deep understanding of enterprise integration methodologies, best practices and API architecture</li>
<li>Low-code no-code application experience throughout the customer life cycle</li>
<li>Knowledge of complementary technologies and products - including but not limited to data warehousing and AI systems</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Passion and creativity</li>
<li>Autonomous and motivated</li>
<li>Purposeful, thoughtful, and detail-oriented</li>
<li>Effective time and project management</li>
<li>Curiosity</li>
<li>Other</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>BS/BA, MBA preferred or equivalent experience</li>
<li>Airtable knowledge or certification preferred</li>
<li>Command of the Message preferred</li>
<li>Travel as needed</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS Pre-Sales, Business Acumen, Technical Knowledge, Effective Communication Skills, Marketing, Product Operations, Retail MGMT, Database Architecture, Enterprise Integration Methodologies, API Architecture, Low-code No-code Application Experience, Data Warehousing, AI Systems</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers organisations to accelerate their business processes.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8378441002</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a03720f6-bc3</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>As a Solutions Architect at Databricks, you will partner with our customers to design scalable data architectures using Databricks technology and services.</p>
<p>You have technical depth and business knowledge and can drive complex technology discussions which express the value of the Databricks platform throughout the sales lifecycle.</p>
<p>In partnership with our Account Executives, you will engage with our customers&#39; technical leads, including architects, engineers, and operations teams with the goal of establishing yourself as a trusted advisor to achieve tangible outcomes.</p>
<p>You will work with teams across Databricks and our executive leadership to represent your customer&#39;s needs and build valuable customer engagements and report to the Field Engineering Manager.</p>
<p>The impact you will have:</p>
<ul>
<li>Work with Sales and other essential partners to develop account strategies for your assigned accounts to grow their usage of the platform.</li>
<li>Establish the Databricks Lakehouse architecture as the standard data architecture for customers through excellent technical account planning.</li>
<li>Build and present reference architectures and demo applications for prospects to help them understand how Databricks can be used to achieve their goals to land new users and use cases.</li>
<li>Capture the technical win by consulting on big data architectures, data engineering pipelines, and data science/machine learning projects; prove out the Databricks technology for strategic customer projects; and validate integrations with cloud services and other 3rd party applications.</li>
<li>Become an expert in, and promote, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a customer-facing pre-sales, technical architecture, or consulting role with expertise in at least one of the following technologies:
<ul>
<li>Big data engineering (Ex: Spark, Hadoop, Kafka)</li>
<li>Data Warehousing &amp; ETL (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning (Ex: pandas, scikit-learn, HPO)</li>
<li>Data Applications (Ex: Logs Analysis, Threat Detection, Real-time Systems Monitoring, Risk Analysis and more)</li>
</ul>
</li>
<li>Experience translating a customer&#39;s business needs to technology solutions, including establishing buy-in with essential customer stakeholders at all levels of the business.</li>
<li>Experienced at designing, architecting, and presenting data systems for customers and managing the delivery of production solutions of those data architectures.</li>
<li>Fluent in SQL and database technology.</li>
<li>Debug and development experience in at least one of the following languages: Python, Scala, Java, or R.</li>
<li>Desired: Built solutions with public cloud providers such as AWS, Azure, or GCP</li>
<li>Desired: Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</li>
<li>Travel to customers in your region up to 30% of the time.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,500-$224,000 CAD</Salaryrange>
      <Skills>Big data engineering, Data Warehousing &amp; ETL, Data Science and Machine Learning, Data Applications, SQL and database technology, Python, Scala, Java, or R, Built solutions with public cloud providers such as AWS, Azure, or GCP, Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5898477002</Applyto>
      <Location>Toronto, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19182c1d-b27</externalid>
      <Title>Solutions Architect - UAE</Title>
      <Description><![CDATA[<p>At Databricks, our core values are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>
<p>We aim to inspire our customers to make informed decisions that push their business forward. We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>
<p>As a Solutions Architect in the UAE Pre-Sales team, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customised solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>Join us in our quest to change how people work with data and make a better world!</p>
<p>The impact you will have:</p>
<ul>
<li>Create impactful and successful relationships with customer accounts in the United Arab Emirates, providing technical and business value to Databricks customers in collaboration with the extended team.</li>
<li>Become the trusted advisor of your customer on the Data and AI landscape by successfully driving and delivering the adoption of the Databricks Data Intelligence Platform.</li>
<li>Enable partners and support internal events in the MEA region.</li>
<li>Scale best practices in your field by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Experienced in customer interactions in a technical pre-sales capacity and adept in managing complex sales lifecycles.</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring an ability to switch context and/or levels of technical depth.</li>
<li>Ability to provide technical solutions for specialised customer needs, navigate a competitive landscape and effectively develop relationships to achieve long-term customer success.</li>
<li>Hands-on expertise with complex Big Data architecture design for public cloud platform(s) solutions, focusing on use cases in Data Warehousing and Data Engineering architecture and implementation. Data Science and Machine Learning skills will be advantageous.</li>
<li>Prior experience with coding in a core programming language (e.g., Python, SQL) and willingness to learn Apache Spark™.</li>
<li>Experience and skills on the Databricks platform will be highly advantageous for the role!</li>
<li>Excellent communication skills in English required as a minimum. Fluency in Arabic will be highly preferable for the position.</li>
</ul>
<p>Key Notes:</p>
<ul>
<li>Location for the role will be in Paris (i.e. within a commutable distance for a hybrid schedule).</li>
<li>You will need to be flexible and willing to travel to the United Arab Emirates for customer visits on a regular basis (i.e. up to ~2 weeks per month).</li>
<li>We are seeking a candidate that will be interested in a future relocation to the region (Dubai) when an office is opened.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>customer interactions, technical pre-sales capacity, complex sales lifecycles, use case discovery, solution architecture designs, Big Data architecture design, public cloud platform(s), Data Warehousing, Data Engineering, Apache Spark, Python, SQL, Data Science, Machine Learning, Arabic</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8287419002</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>02ba8342-079</externalid>
      <Title>Specialist Solutions Architect - Data Warehousing (Healthcare &amp; Life Sciences)</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>
<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>
<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>
<li>Prove the value of the Databricks Intelligence Platform for customer workloads by architecting production workloads, including end-to-end pipeline load performance testing and optimization</li>
<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>
<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>
<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>
<li>Contribute to the Databricks Community</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>
<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>
<li>Deep Specialty Expertise in at least one of the following areas:
<ul>
<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>
<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>
<li>Experience migrating on-premise EDW workloads to the public cloud</li>
</ul>
</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience</li>
<li>Production programming experience in SQL and Python, Scala, or Java</li>
<li>Experience with the AWS, Azure, or GCP clouds</li>
<li>2 years professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>
<li>2 years customer-facing experience in a pre-sales or post-sales role</li>
<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>
<li>Can travel up to 30% when needed</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8337429002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of:</li>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Have excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on experience with data structures and algorithms (DSA) coding. You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ebf95cea-76b</externalid>
      <Title>Technical Escalation Manager</Title>
      <Description><![CDATA[<p>As a Technical Escalation Manager at Databricks, you will be responsible for coordinating efforts to resolve critical customer issues, customer-impacting situations, and major incidents. You will work with multiple internal teams (engineering, product management, Customer Success Engineering, and Support) and external partners to effectively resolve these customer-impacting situations.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Managing support escalation in partnership with engineering, product management, Customer Success Engineering, Support, Customers, and Partners until resolution.</li>
<li>Achieving customer satisfaction by ensuring incidents or escalations (and related cases) are well and fully documented with the timely execution of action items.</li>
<li>Creating and executing a data-driven customer recovery plan for every escalation and incident that is addressed.</li>
<li>Utilizing business and technical skills to manage customer escalations, coordinate meetings and deliverables, and analyze trends and patterns for reporting purposes.</li>
<li>Using data, metrics, and feedback to inform operational and tactical decisions that improve incident and escalation management.</li>
<li>Coordinating all necessary resources to fast-track and resolve new incidents and escalations from customers with a clear and detailed plan.</li>
</ul>
<p>We are looking for a candidate with at least 8 years of experience in customer support, escalation, SRE, or incident management. You should have excellent contextual interpretation and writing skills, as well as the ability to effectively summarize and communicate to both technical and business audiences.</p>
<p>You will also need experience with distributed big data computing environments, SQL-based databases, and data warehousing and ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL. Linux/Unix administration skills, networking, and hands-on cloud experience with AWS, Azure, or GCP are required.</p>
<p>Experience working cross-functionally with support, engineering, product management, and directly with customers; ability to deeply understand product and customer personas is also essential.</p>
<p>A Bachelor&#39;s or Master&#39;s degree in Computer Science or Computer Engineering, or related Engineering field is preferred. Written and spoken proficiency in both Japanese and English is also required.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>customer support, escalation, SRE, incident management, distributed big data computing, SQL-based databases, data warehousing, ETL technologies, Linux/Unix administration, networking, cloud experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and operates the world&apos;s leading data and AI infrastructure platform, enabling customers to leverage deep data insights and enhance their business.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407911002</Applyto>
      <Location>Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1be89b3c-bc1</externalid>
      <Title>Staff Analytics Engineer</Title>
      <Description><![CDATA[<p>We are currently hiring for multiple teams:</p>
<p>Foundational Data team: Our mission in the Foundational Data team is to build and maintain high-quality datasets frequently used across all of Airbnb. We set company-wide standards that decide how locations are grouped into regions, visitors are measured based upon site traffic, bot traffic is separated from organic traffic, and cloud costs are attributed to Airbnb services. This data is used to build public financial reports, drive strategic marketing decisions, and manage operational costs.</p>
<p>AirCover Data Foundation: The AirCover Data Foundation team is responsible for providing trustworthy, consistent data and metrics to facilitate business insights, informed decision-making, and seamless operations across Airbnb&#39;s AirCover programs, such as Guest Travel Insurance, AirCover for Hosts, and AirCover for Guests.</p>
<p>As a Staff Analytics Engineer, you will bring a unique lens to our data strategy and provide in-depth technical mentorship and leadership to the team. We are looking for someone with expertise in data modeling, metric development, and large-scale distributed data processing frameworks like Presto or Spark.</p>
<p>Leveraging our internal, top-tier data tooling alongside other resources, you will empower both technical and non-technical teams across Airbnb to utilize our data for making decisions grounded in evidence. Staff-level engineers are expected to do this with a minimal amount of supervision. We value innovative thinkers who consistently seek smarter and more efficient solutions while managing daily operations, deadlines, and collaborating with team members.</p>
<p>A Typical Day:</p>
<ul>
<li>Develop high-quality data assets to satisfy a wide range of use-cases</li>
<li>Develop frameworks and tools to scale insight generation to meet critical business and infrastructure requirements</li>
<li>Collaborate and build strong partnerships with other data practitioners throughout Airbnb</li>
<li>Influence the trajectory of data in decision making</li>
<li>Improve trust in our data by championing for data quality across the stack</li>
</ul>
<p>Your Expertise:</p>
<ul>
<li>9+ years of experience with a BS/Masters or 6+ years with a PhD</li>
<li>Fluent in SQL and proficient in at least one data engineering language, such as Python or Scala</li>
<li>Expertise using business intelligence and reporting tools like Superset and Tableau</li>
<li>Expertise in large-scale distributed data processing frameworks like Presto or Spark</li>
<li>Expertise in data modeling for data warehouses and/or metrics repositories</li>
<li>Experience with an ETL framework like Airflow</li>
<li>Clear and mature communication skills: ability to distill complex ideas for technical and non-technical stakeholders</li>
<li>Ability to provide technical leadership and mentorship, guiding teams on best practices and contributing to the development of analytic engineering strategies</li>
<li>Experience exploring and leveraging LLMs in everyday tasks (coding, documentation, etc.)</li>
<li>Strong capability to forge trusted partnerships across working teams</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Scaling data tasks via automation</li>
<li>Previous experience in large-scale cloud-based software engineering or system architecture</li>
<li>Experience with AB experimentation</li>
<li>Familiarity with AI/ML algorithms, including their dependencies on data, as well as their respective strengths and limitations</li>
<li>Designing and/or leveraging high-quality data visualization tools</li>
</ul>
<p>Your Location: This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager. While the position is Remote Eligible, you must live in a state where Airbnb, Inc. has a registered entity. Click here for the up-to-date list of excluded states.</p>
<p>Our Commitment To Inclusion &amp; Belonging: Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply. We strive to also provide a disability inclusive application and interview process. If you are a candidate with a disability and require reasonable accommodation in order to submit an application, please contact us at: reasonableaccommodations@airbnb.com.</p>
<p>How We&#39;ll Take Care of You: Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits. Pay Range $194,000-$240,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$194,000-$240,000 USD</Salaryrange>
      <Skills>SQL, Python, Scala, Presto, Spark, Superset, Tableau, ETL, Airflow, Data Modeling, Data Warehousing, Metrics Repositories, LLM AI, AI/ML Algorithms, Data Visualization, Cloud-Based Software Engineering, System Architecture, AB Experimentation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most well-known travel companies in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7733495</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e2f537b7-0f0</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems with the Databricks Data Intelligence Platform.</p>
<p>As a Delivery Solutions Architect (DSA), you are a trusted technical advisor to key customers, providing expert guidance that translates data, analytics, and AI challenges into high-impact business value.</p>
<p>You help design, implement, and scale data and AI solutions, focusing on architecture, operational excellence, and customer enablement.</p>
<p>Internally, you will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks Platform in your customers.</p>
<p>DSAs focus on:</p>
<ul>
<li>Designing secure, scalable architecture</li>
<li>Aligning people, processes, and technology</li>
<li>Establishing trusted advisor relationships</li>
<li>Leveraging the broader ecosystem of Databricks experts</li>
</ul>
<p>This is a hybrid technical and commercial role.</p>
<p>Technically, the expectations are that you become the post-sales technical lead and trusted advisor across all Databricks products for the customer&#39;s top priority use cases.</p>
<p>This requires you to use your technical skills and credibility to engage and communicate with technical and technical leadership stakeholders in our customer organizations, conduct architecture reviews, help with performance and cost optimizations, demonstrate new capabilities, remove blockers, etc.</p>
<p>In parallel, it is commercial in the sense that you will drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving onboarding plans.</p>
<p>While not a hands-on-keyboard role, this is a highly technical position where architectural skills in fields such as Data Architecture, Data Engineering, Data Warehousing, or Data Science are essential.</p>
<p>You will report directly to a DSA Manager within the Field Engineering organization.</p>
<p>The impact you will have:</p>
<ul>
<li>Be the Databricks Architect working with customer technical teams on use cases/data products, from development to go-live, addressing any technical challenges and blockers and providing guidance, best practices, and enablement</li>
<li>Lead the post-technical win technical account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts</li>
<li>Be the internal point of contact for any questions related to production/go live status of agreed-upon use cases within an account, often for multiple use cases within the largest and most complex organizations</li>
<li>Leverage Shared Services, User Education, Onboarding/Technical Services, and Support resources, along with escalating to expert-level technical teams, to address tasks that are beyond your scope of activities or expertise</li>
<li>Create and execute a point-of-view as to how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS Engagement proposals</li>
<li>Navigate Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs, presenting them to customers when applicable for their ongoing developments</li>
<li>Develop an execution plan that covers all activities of all customer-facing technical roles and teams across the below work streams:</li>
</ul>
<ul>
<li>Main use cases moving from &#39;win&#39; to production</li>
<li>Enablement/user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of Databricks&#39; Lakehouse vision)</li>
<li>Organic needs for current investment (e.g., cloud cost control, tuning &amp; optimization)</li>
<li>Executive and operational governance</li>
</ul>
<ul>
<li>Provide internal and external updates</li>
<li>KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption, and use case progression, to your Technical GM</li>
<li>KPI reporting on the status of usage and customer health, risks and blockers, product adoption, and use case progression, to your Field Engineering leadership</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6-10 years of experience where you have been accountable for delivery of projects in Data, Analytics, or AI and where you can contribute to technical debate and design choices with customers</li>
<li>Programming experience in PySpark, SQL, or Scala</li>
<li>Understanding and hands-on experience of solution architecture for distributed data and analytics systems</li>
<li>Experience in customer-facing pre-sales, technical architecture, customer success, or consulting roles</li>
<li>Understanding of how to attribute business value and outcomes to specific project deliverables</li>
<li>Technical program coordination, including account and stakeholder management</li>
<li>Experience resolving complex and important escalations with senior customer technical stakeholders</li>
<li>Track record of overachievement against quota, goals, or similar objective targets</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Can travel up to 30%</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI.</p>
<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake, and MLflow.</p>
<p>To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.</p>
<p>Benefits</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>
<p>Compliance</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PySpark, SQL, Scala, Data Architecture, Data Engineering, Data Warehousing, Data Science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a platform for unifying and democratizing data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8368003002</Applyto>
      <Location>Remote - Italy</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a84988d7-61a</externalid>
      <Title>Partner Solutions Architect, CEMEA</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect, you will engage with our top Consulting and System Integrator (C&amp;SI) Partners and Field Engineering to drive adoption of the Data Intelligence Platform in our top customers through C-Suite Technical Executive alignment, engagement of Champions, and collaboration with our sales teams in the region.</p>
<p>You will develop ongoing partner capability via the &#39;Technical Champions program&#39; within our top C&amp;SI Partners to support CoE creation and delivery excellence.</p>
<p>You will provide strategic vision related to the Databricks Data Intelligence Platform aligning to the GSI engagement in our top accounts and develop and support programs to promote Partner expertise in the application of the Databricks Data Intelligence Platform.</p>
<p>Reporting to the Director, Field Engineering (Partner Solutions Architect)</p>
<p>The impact you will have:</p>
<p>A Partner Solutions Architect plays a crucial role in the success of the Databricks partner ecosystem by ensuring partners have the technical knowledge and experience to build, maintain and grow successful solutions for their customers.</p>
<p>This, in turn, drives the adoption and success of the Databricks Data Intelligence Platform and the Partner solutions in the market.</p>
<p>To achieve this, the PSA will:</p>
<ul>
<li>Accelerate Partner pre-sales and delivery in joint, strategic customer accounts by aligning Partner and Databricks resources and providing technical expertise to accelerate adoption and consumption of the platform</li>
<li>Work closely with Databricks account teams to help our partner ecosystem scope, evaluate and deliver large scale data projects and transformational programmes</li>
<li>Grow the Partner Databricks delivery capability by providing technical expertise to help design, build and maintain repeatable solutions using Databricks Products and Services</li>
<li>Develop, maintain and grow Senior Technical Executive relationships to identify new business opportunities, innovative use cases, and competitive activity, and to support joint go-to-market initiatives.</li>
<li>The role requires up to 40% travel to GSI Partner sites and the Databricks German offices.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive hands-on experience as a Data Professional in a modern cloud-based data stack</li>
<li>At least 3 years of experience with technical pre-sales and sales methodologies within a consumption business model</li>
<li>Collaborate closely with partner organisations at the senior executive level to understand their needs and objectives and align them to Databricks products and services</li>
<li>Conduct training sessions, workshops, and webinars to educate partners on new technologies, features, and best practices.</li>
<li>Develop and maintain technical content such as whitepapers, case studies, and solution guides to assist partners in leveraging the Databricks offerings</li>
<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala)</li>
<li>Designing, implementing and maintaining end-to-end Data Architectures for Big Data, Data Warehousing and AI on MPP-based platforms</li>
<li>Managing multiple, frequently changing priorities across multiple teams</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Intelligence Platform, Cloud-based data stack, Technical pre-sales and sales methodologies, Partner organisations, Senior executive level, New technologies, Features, Best practices, Core programming language, Python, Java, Scala, Data Architectures, Big Data, Data Warehousing, AI, MPP based platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8413308002</Applyto>
      <Location>Switzerland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25f010f0-7d1</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Brex’s AI-native automation and world-class service eliminate manual expense and accounting tasks for customers so they can focus on what matters most. Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>Our Scientists and Engineers work together to make data, and insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>
<p>What You’ll Do</p>
<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. You also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>
<p>Where you’ll work</p>
<p>This role will be based in our San Francisco office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>
<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that enable ease of predictive analytics, data analysis, and metrics formulation.</li>
<li>Maintain data documentation and definitions, building and ensuring that source-of-truth tables remain high quality for data science and reporting applications.</li>
<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>
<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>
<li>Set and proliferate company-wide standards for data relating to structure, quality, and expectations.</li>
<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>
<li>2+ years of experience working with modern data transformation tools like DBT.</li>
<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>
<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>
<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>
<li>Experience working with a data warehouse such as Snowflake.</li>
<li>Experience with a data workflow orchestrator tool such as Airflow.</li>
<li>Experience with a programming language such as Python.</li>
<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>
<li>Exceptional quantitative and analytical skills.</li>
<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$120,800 - $151,000</Salaryrange>
      <Skills>DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, Looker, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8366850002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0c456364-565</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>As a Delivery Solutions Architect at Databricks, you will be a trusted technical advisor embedded within the customer organisation. You will work closely with sales and field engineering to accelerate adoption and growth of the Databricks platform. You will ensure customer success by providing technical accountability for our most complex customers, helping them maximise the value of Databricks workloads they have already selected and improving their return on investment.</p>
<p>This role blends deep technical leadership with strategic customer engagement. You will own the post-sales technical strategy for the customer’s highest-value use cases and serve as their primary advisor across the Databricks platform.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Being the accountable Databricks Architect for your assigned customers, working with technical teams to guide priority use cases from design through go-live, removing blockers, providing best practices, and ensuring stable, scalable adoption.</li>
<li>Leading the post-technical-win strategy and execution plan for major Databricks use cases, aligning with Solutions Architects to understand full demand plans and drive clarity across multiple selling teams and stakeholders.</li>
<li>Owning the technical leadership of assigned use cases, creating certainty from ambiguity and coordinating onboarding, enablement, success, go-live, and healthy consumption of workloads selected for Databricks.</li>
<li>Serving as the first point of contact for production/go-live status, often across multiple complex use cases within large enterprise organisations.</li>
<li>Orchestrating the broader Databricks ecosystem (Shared Services, User Education, Onboarding/Technical Services, Support, and specialist technical teams) to ensure high-quality delivery and escalate advanced issues when needed.</li>
<li>Creating and executing a point of view for accelerating use cases into production, collaborating with Professional Services on proposals as needed.</li>
<li>Partnering with Product and Engineering to introduce new capabilities, private previews, and upgrade paths that support customer roadmaps.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Programming experience in Python, SQL, or Scala, and a solid understanding of distributed data systems.</li>
<li>5+ years of experience delivering Data, Analytics, or AI projects, with the ability to contribute to architectural discussions with customers.</li>
<li>Experience in customer-facing technical roles such as technical architecture, pre-sales, consulting, or customer success.</li>
<li>Ability to guide architectural decisions in domains such as data engineering, data architecture, data warehousing, or data science.</li>
<li>Demonstrated ability to drive delivery outcomes without hands-on keyboard responsibilities.</li>
<li>Experience resolving complex escalations with senior customer stakeholders.</li>
<li>Understanding of how to connect technical deliverables to business value.</li>
<li>Track record of achieving or exceeding goals or objectives.</li>
<li>Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent experience.</li>
<li>Fluency in English is required; French or German language skills are a plus.</li>
<li>Ability to travel up to 30%.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Scala, Distributed data systems, Data engineering, Data architecture, Data warehousing, Data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8309177002</Applyto>
      <Location>Zürich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d204fa1-067</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Data at Brex</p>
<p>Our Scientists and Engineers work together to make data, and insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>
<p>What You’ll Do</p>
<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. You also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>
<p>Where you’ll work</p>
<p>This role will be based in our Seattle office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>
<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that enable ease of predictive analytics, data analysis, and metrics formulation.</li>
<li>Maintain data documentation and definitions, building and ensuring that source-of-truth tables remain high quality for data science and reporting applications.</li>
<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>
<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>
<li>Set and proliferate company-wide standards for data relating to structure, quality, and expectations.</li>
<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>
<li>2+ years of experience working with modern data transformation tools like DBT.</li>
<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>
<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>
<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>
<li>Experience working with a data warehouse such as Snowflake.</li>
<li>Experience with a data workflow orchestrator tool such as Airflow.</li>
<li>Experience with a programming language such as Python.</li>
<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>
<li>Exceptional quantitative and analytical skills.</li>
<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$120,800 - $151,000</Salaryrange>
      <Skills>DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, Looker, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides corporate cards and banking services to businesses.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8510493002</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe0d53c0-05e</externalid>
      <Title>Delivery Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play a critical role during this journey. The DSA works across a small number of our largest or highest potential key accounts, collaborating across Databricks teams to accelerate the adoption and growth of the Databricks platform.</p>
<p>As a DSA, you will help ensure customer success by driving focus and technical accountability to our most complex customers who need guidance to accelerate consumption on Databricks workloads that they have already selected. This is a hybrid technical and commercial role. It is commercial in the sense that you will be required to own and drive growth in your assigned customers and use cases through leading your customers&#39; stakeholders, owning executive relationships and creating and driving plans and strategies for Databricks colleagues to execute upon.</p>
<p>This is in parallel to being technical, with expectations being that you become at least Level 200 across all Databricks products/workloads and that you become the Use Case-specific technical lead post Technical Win. You will bring strong executive relationship management skills and high levels of technical credibility to effectively engage and communicate at all levels with an organization, in particular with a track record of building strong relationships with the customers&#39; executives and C-suite, elevating the conversation, and helping them realize the value of Databricks.</p>
<p>You will report directly to a Director, Field Engineering, as part of your Business Unit&#39;s Technical GM organization. You will play a key role in establishing the fundamental assets and best practices within the DSA team, mentoring other DSAs and wider account team members within your region, helping them develop personally, professionally and to further their careers.</p>
<p>The impact you will have:</p>
<ul>
<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers.</li>
<li>Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts.</li>
<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>
<li>Be the first point of contact for any technical issues or questions related to production/go live status of agreed upon Use Cases within an account, oftentimes servicing multiple use cases within the largest and most complex organizations.</li>
<li>Leverage both Shared Services of User Education, Onboarding/Technical Services and Support resources, along with escalating to Level 400/500 technical experts (Specialist Solution Architects and Product Specialists) to execute on the right tasks that are beyond your scope of activities or expertise.</li>
<li>Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals.</li>
<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs (DBR, E2 and Unity Catalog).</li>
<li>Build and maintain an executive level as well as a detailed programme level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist, SA to cover the below workstreams:</li>
</ul>
<ul>
<li>Key use cases moving from &#39;win&#39; to production</li>
<li>Enablement / user growth plan</li>
<li>Product adoption (strategy and activities to increase adoption of the Lakehouse vision)</li>
<li>Organic needs for current investment Eg. Cloud Cost control, Tuning &amp; Optimization</li>
<li>Executive and operational governance</li>
<li>Proactively provide internal and external updates</li>
<li>KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression to your Technical GM</li>
<li>Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Engineering technologies (e.g. Spark, Hadoop, Kafka), Data Warehousing (e.g. SQL, OLTP/OLAP/DSS), Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO), Executive disciplinary management, Influencing and leading teams, Strategic Management Consulting, Building and steering to a value case, Quota ownership, achievement and track record of great performance against objective target, Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. The company was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482406002</Applyto>
      <Location>Seoul, South Korea</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e9ccab76-2de</externalid>
      <Title>Solutions Consultant-Northeast</Title>
      <Description><![CDATA[<p>As a Solutions Consultant-Northeast, you will leverage your professional tenure, robust business acumen, and in-depth technical expertise to guide prospective customers through their Airtable evaluation and expansion. You will partner across lines of business as you lead the technical evaluation and compel decision-makers to choose Airtable as their critical operational workflow management tool.</p>
<p>Our mission is to fundamentally improve our customers&#39; quality of life by empowering organisations and the individuals that drive them to reimagine the way they operate, collaborate, and innovate in Airtable.</p>
<p>We sit at the confluence of Sales, Services, Product, and Marketing and take a customer-centric approach to creating value through intentional listening, storytelling, expert guidance, tailored solutions, and a highly consultative process.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with Pre-Sales teams to identify, qualify, and drive revenue opportunities at Enterprise accounts</li>
<li>Building meaningful relationships with and serving as a trusted advisor to a prospect or customer&#39;s technical teams</li>
<li>Owning the technical evaluation process, showcasing customer-centric value within the context of the customer&#39;s unique requirements, workflows, and business process needs</li>
<li>Translating the value of Airtable&#39;s tooling into the language of the customer</li>
<li>Managing multiple sales cycles simultaneously, proactively communicating with stakeholders and prioritising effectively</li>
<li>Consulting with credibility, bringing a well-rounded and unique perspective with hands-on cross-functional experience</li>
<li>Informing the product team and ultimately the product roadmap with meaningful market potential findings</li>
<li>Representing Airtable in every engagement with the utmost care</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience in SaaS Pre-Sales</li>
<li>7+ years industry experience</li>
<li>Experience managing sales cycles and owning the technical evaluation</li>
<li>Working knowledge of Marketing, Product Operations, Retail Management, and other Business Units</li>
<li>Knowledge of business operations, including understanding value drivers for various personas and industries</li>
<li>Understanding best practices for structuring data in a relational database and database architecture</li>
<li>Passionate about creating unique solutions for complex business problems</li>
<li>Loves the spotlight, being an adaptable and compelling communicator and presenter</li>
<li>Thirst for knowledge, being a researcher, analyst, and emerging trends enthusiast</li>
<li>Technically savvy, demonstrating a deep understanding of enterprise integration methodologies, best practices, and API architecture</li>
<li>Low-code, no-code application experience throughout the customer lifecycle</li>
<li>Knowledge of complementary technologies and products, including data warehousing and AI systems</li>
</ul>
<p>We value passion, creativity, autonomy, purposefulness, time management, and curiosity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS Pre-Sales, Technical Evaluation, Customer-Centric Approach, Data Structuring, Database Architecture, Enterprise Integration Methodologies, API Architecture, Low-Code, No-Code Application Experience, Marketing, Product Operations, Retail Management, Business Operations, Data Warehousing, AI Systems</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers organisations to accelerate their most critical business processes. Over 500,000 organisations, including 80% of the Fortune 100, rely on Airtable.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/7892134002</Applyto>
      <Location>New York, NY; Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c53ecdd3-dc7</externalid>
      <Title>Scale Solution Engineer</Title>
      <Description><![CDATA[<p>As a Scale Solution Engineer at Databricks, you will play a critical role in advising customers during their onboarding process. You will work directly with customers to help them onboard and deploy Databricks in their production environment.</p>
<p>Your impact will be significant, ensuring new customers have an excellent experience by providing technical assistance early in their journey. You will become an expert on the Databricks Platform and guide customers in making the best technical decisions. You will also work directly with multiple customers concurrently to provide technical solutions.</p>
<p>To succeed in this role, you will need:</p>
<ul>
<li>An undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>1+ years experience in a technical role, preferably in the data or cloud field</li>
<li>Knowledge of at least one of the public cloud platforms AWS, Azure, or GCP</li>
<li>Knowledge of a programming language such as Python, Scala, or SQL</li>
<li>Knowledge of end-to-end data analytics workflow</li>
<li>Hands-on professional or academic experience in one or more of the following: Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow), Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
<li>Excellent time management and prioritization skills</li>
<li>Excellent written and verbal communication</li>
</ul>
<p>Bonus: Knowledge of Data Science and Machine Learning (e.g., build and deploy ML Models)</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>public cloud platforms, AWS, Azure, GCP, Python, Scala, SQL, Data Engineering technologies, ETL, DBT, Spark, Airflow, Data Warehousing technologies, Stored Procedures, Redshift, Snowflake, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8408817002</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>456f029f-2e2</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing ZoomInfo&#39;s next-generation unified data platform.</p>
<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>
<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>
<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation (see the sketch after this list).</li>
<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>
<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>
<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>
<li>Implement CDC pipelines and calculated field processing for derived data views.</li>
<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>
<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>
<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>
<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>
<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>
<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>
</ul>
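<p>For illustration only: one minimal way to think about metadata-driven schema generation is to render GraphQL SDL from a field-metadata catalog. The sketch below is a hypothetical Python example; the type names, metadata layout, and generation approach are assumptions for this posting, not ZoomInfo&#39;s actual design.</p>
<pre><code># Hypothetical sketch: generate GraphQL SDL from a field-metadata catalog.
# The catalog layout and type names are illustrative assumptions only.

FIELD_METADATA = {
    "Company": {
        "id": {"type": "ID", "required": True},
        "name": {"type": "String", "required": True},
        "employeeCount": {"type": "Int", "required": False},
    },
    "Contact": {
        "id": {"type": "ID", "required": True},
        "email": {"type": "String", "required": False},
        "companyId": {"type": "ID", "required": True},
    },
}


def render_sdl(metadata: dict) -> str:
    """Render GraphQL type definitions plus a simple Query root from the catalog."""
    blocks = []
    for type_name, fields in metadata.items():
        lines = [f"type {type_name} {{"]
        for field, spec in fields.items():
            gql_type = spec["type"] + ("!" if spec["required"] else "")
            lines.append(f"  {field}: {gql_type}")
        lines.append("}")
        blocks.append("\n".join(lines))
    # Expose one id-keyed lookup per type on the Query root.
    query = ["type Query {"]
    for type_name in metadata:
        query.append(f"  {type_name[0].lower() + type_name[1:]}(id: ID!): {type_name}")
    query.append("}")
    blocks.append("\n".join(query))
    return "\n\n".join(blocks)


if __name__ == "__main__":
    print(render_sdl(FIELD_METADATA))
</code></pre>
<p>In a real federated system, the same metadata would presumably also drive resolvers and routing to the backing stores (BigTable, BigQuery, Solr+), which this sketch does not attempt to cover.</p>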
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>
<li>10+ years of software engineering experience building large-scale data platforms.</li>
<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>
<li>Strong experience with Java 8+, Scala, Kotlin, GoLang for data systems development.</li>
<li>Proven experience with GCP or AWS and cloud-native architectures.</li>
<li>Experience with streaming/real-time data processing technologies.</li>
<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>
<li>Hands-on experience with Google Cloud Platform services.</li>
<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>
<li>Experience solving data freshness and consistency challenges in distributed systems.</li>
<li>Background in building observability and monitoring solutions for data platforms.</li>
<li>Familiarity with metadata management and schema evolution.</li>
<li>Experience with Kubernetes for deploying data services.</li>
<li>SQL query optimization and performance tuning expertise.</li>
<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>
<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>
<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>
<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>
<li>Passion for building reliable, observable, and maintainable systems.</li>
<li>Experience promoting diverse, inclusive work environments.</li>
</ul>
<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>
<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. Learn more about ZoomInfo benefits here.</p>
<p>Below is the US base salary for this position. Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>
<p>$163,800-$257,400 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema generation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8243004002</Applyto>
      <Location>Remote-US-CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62b2a5a2-9bd</externalid>
      <Title>Big Data Solutions Architect (Professional Services)</Title>
      <Description><![CDATA[<p>As a Big Data Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>
<p>You will deliver data engineering, data science, and cloud technology projects that require integration with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>
<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>
<p>You will report to the regional Manager/Lead.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Working on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>
<li>Working with engagement managers to scope a variety of professional services work with input from the customer</li>
<li>Guiding strategic customers as they implement transformational big data projects and 3rd-party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>
<li>Consulting on architecture and design; bootstrapping or implementing customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks</li>
<li>Providing an escalated level of support for customer operational issues</li>
<li>Working with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>
<li>Working with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>
</ul>
<p>What we look for:</p>
<ul>
<li>6+ years experience in data engineering, data platforms &amp; analytics</li>
<li>Strong expertise in data warehousing concepts, architecture, and migration strategies</li>
<li>Comfortable writing code in either Python, PySpark or Scala</li>
<li>Working knowledge of two or more common cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>
<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>
<li>Familiarity with CI/CD for production deployments</li>
<li>Working knowledge of MLOps</li>
<li>Design and deployment of performant end-to-end data architectures</li>
<li>Experience with technical project delivery - managing scope and timelines</li>
<li>Documentation and white-boarding skills</li>
<li>Experience working with clients and managing conflicts</li>
<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>
<li>Data Science expertise is a nice-to-have</li>
<li>Travel to customers 10-20% of the time</li>
<li>Databricks Certification</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data science, cloud technology, Apache Spark, CI/CD, MLOps, data warehousing, migration strategies, Python, Pyspark, Scala, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482697002</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ce3bb01-4a1</externalid>
      <Title>Scale Solutions Engineer</Title>
      <Description><![CDATA[<p>At Databricks, we aim to empower our customers to solve the world&#39;s most challenging data problems using the Data Intelligence platform. As a Scale Solution Engineer, you will be critical in advising customers during their onboarding. You will work directly with customers to help them onboard and deploy Databricks in their production environment and accelerate Databricks features adoption.</p>
<p>The impact you will have:</p>
<ul>
<li>Ensure new customers have an excellent experience by providing technical assistance early in their journey</li>
<li>Become an expert on the Databricks Platform and guide customers in making the best technical decisions</li>
<li>Work directly with multiple customers concurrently to provide technical solutions</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Undergraduate degree or higher in Computer Science, Information Systems, or relevant experience</li>
<li>3+ years experience in a customer-facing technical role in pre-sales, professional services, consulting or customer success</li>
<li>Solid understanding of the end-to-end data analytics workflow</li>
<li>Excellent time management and prioritization skills</li>
<li>Knowledge of public cloud platforms (AWS, Azure or GCP) would be a plus</li>
<li>Knowledge of a programming language - Python, Scala, or SQL</li>
<li>Hands-on professional or academic experience in one or more of the following:</li>
<li>Data Engineering technologies (e.g., ETL, DBT, Spark, Airflow)</li>
<li>Data Warehousing technologies (e.g., SQL, Stored Procedures, Redshift, Snowflake)</li>
<li>Excellent written and verbal communication, in English and Portuguese</li>
<li>Bonus - Knowledge of Data Science and Machine Learning (e.g., build and deploy ML Models)</li>
<li>Databricks certification(s)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks, Data Engineering, Data Warehousing, Python, Scala, SQL, AWS, Azure, GCP, ETL, DBT, Spark, Airflow, Redshift, Snowflake, English, Portuguese, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8391865002</Applyto>
      <Location>Sao Paulo, Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>870b07b0-501</externalid>
      <Title>Partner Solutions Architect</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect (PSA) for India, you will work with the technical and sales team members who work directly with our customers. You will develop &#39;technical champions&#39; within our top Partners, providing enablement on technical matters related to the Databricks product.</p>
<p>Working with our partners, you will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>As a member of our team, you will exercise and develop expertise in those areas, using open-source projects such as Apache Spark, MLflow, and Delta Lake; and major public cloud infrastructure and services.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide partners with the level of enablement they need to assist their clients in evaluating and adopting Databricks, including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>
<li>Engage with the partner technical community by leading workshops, seminars, and meet-ups</li>
<li>You will be a Big Data Analytics expert on aspects of architecture and design and will share this with our partner network</li>
<li>Show expertise by producing creative technical solutions and blog posts</li>
</ul>
<p>What we look for:</p>
<ul>
<li>8 years of customer-facing experience working with external clients or partners across a variety of industry markets</li>
<li>Core strength in either data engineering or data science</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>
<li>Experience developing architectures within a public cloud (AWS, Azure, or GCP)</li>
<li>Hands-on experience in SQL, Python, Scala, or Java</li>
<li>Expertise in at least one of the following:</li>
<li>Data Engineering technologies (Ex: Apache Spark, Hadoop, Kafka)</li>
<li>Data Warehousing (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning technologies (Ex: pandas, scikit-learn, HPO)</li>
</ul>
<p>About Databricks</p>
<p>Databricks is the data and AI company. More than 10,000 organisations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>
<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, SQL, Python, Scala, Java, Data Engineering technologies, Data Warehousing, Data Science and Machine Learning technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on Databricks.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439182002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>894df3d0-189</externalid>
      <Title>Staff Enablement Program Management</Title>
      <Description><![CDATA[<p>This role involves leading the EMEA regional partner enablement strategy, designing and executing high-impact enablement programs, and translating business priorities into world-class learning experiences. The goal is to equip partners to sell, implement, and deliver Databricks solutions with confidence.</p>
<p>Key responsibilities include defining and executing the EMEA partner enablement strategy, designing and delivering scalable enablement programs, owning the regional enablement roadmap, driving measurable ecosystem growth, and partnering closely with regional sales, partner leadership, marketing, and global enablement teams.</p>
<p>The ideal candidate should have 7+ years of experience designing and scaling learning or enablement programs within enterprise technology, cloud, or data ecosystems, and a solid understanding of modern Data &amp; AI architectures, including data warehousing, data engineering, machine learning, and generative AI concepts.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data &amp; AI architectures, data warehousing, data engineering, machine learning, generative AI concepts, program management, cross-functional initiatives, stakeholder management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It has over 10,000 organizations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8439174002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e4ad40eb-311</externalid>
      <Title>Partner Enablement Technical Lead</Title>
      <Description><![CDATA[<p>Databricks is a partner-first company. Our exponential growth is fueled by the success of our partner ecosystem. As the Partner Enablement Technical Lead, you will be a key architect of our global partner enablement strategy. You will design, build, and execute high-impact learning programs,ranging from technical readiness to advanced certifications,ensuring our partners can deliver Databricks projects with total confidence.</p>
<p>This role demands a master of strategic planning and operational excellence. You will manage complex global initiatives, collaborate with cross-functional stakeholders, and translate business needs into world-class learning experiences. You are joining a high-performance team that recently tripled our trained partner base in just 12 months, using innovative learning models to transform the Databricks ecosystem at scale.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design Technical Enablement Programs: Architect comprehensive enablement programs such as Advanced Technical Academies, Delivery Specializations, and technical onboarding to drive partner technical proficiency. In collaboration with Subject Matter Experts, you will define end-to-end learning paths, encompassing curriculum, assessments, and success metrics.</li>
<li>Strategic Alignment: Partner with leadership and partner-facing teams to identify critical skills gaps and translate business requirements into high-impact enablement initiatives.</li>
<li>Technical Roadmap Leadership: Own the roadmap for technical programs, including large-scale learning events, global workshop schedules, and GTM strategies for enablement.</li>
<li>Performance Accountability: Drive results for your partner portfolio, maintaining full ownership of key metrics such as training completion and certification targets.</li>
<li>Data-Driven Insights: Monitor and analyze KPIs (Certification rates, ROI, and program impact) to continuously optimize enablement effectiveness.</li>
<li>Quality Assurance &amp; Coaching: Oversee the technical integrity of programs launched across the team. Evaluate and coach cross-functional training resources to ensure &#39;best-in-class&#39; delivery.</li>
</ul>
<p><strong>Minimum Qualifications:</strong></p>
<ul>
<li>Education: Bachelor&#39;s degree in a technical discipline or equivalent practical experience.</li>
<li>Experience: 7+ years of experience developing and scaling technical learning programs within a global enterprise tech ecosystem.</li>
<li>Technical Depth: Deep understanding of Data &amp; AI technologies (Data Warehousing, Data Transformation, ML, and Generative AI); ideally holds at least one industry certification in these areas.</li>
<li>Consulting Background: Proven experience as part of a technical delivery team, successfully implementing Data &amp; AI projects for external clients.</li>
<li>Ecosystem Knowledge: Strong understanding of the System Integrator (SI) business model and how technical partners successfully go to market.</li>
<li>Program Management: Proven track record of managing technical competency models and learning journeys for large, geographically dispersed teams.</li>
<li>Communication: Exceptional verbal and written communication skills with the ability to influence senior cross-functional stakeholders.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Strong background in the Generative AI landscape.</li>
<li>Experience scaling technical enablement within &#39;Hyperscaler&#39; environments (AWS, Azure, or GCP) or similar high-growth cloud partner ecosystems.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$154,300-$212,200 USD</Salaryrange>
      <Skills>Data &amp; AI technologies, Data Warehousing, Data Transformation, ML, Generative AI, Program Management, Technical Competency Models, Learning Journeys, Cross-Functional Stakeholders, Generative AI landscape, Hyperscaler environments, Cloud partner ecosystems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8435968002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b92459c-038</externalid>
      <Title>Partner Solutions Architect</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect, you will work with Databricks&#39; Consulting and System Integrator (C&amp;SI) partners, teammates, and with the technical and sales team members who work directly with our customers.</p>
<p>You will develop &#39;technical champions&#39; within our top C&amp;SI Partners, providing enablement on technical matters related to the Databricks product. Working with our partners, you will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Data Intelligence Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>
<p>As a member of our team, you will exercise and develop expertise in those areas, using open-source projects such as Apache Spark, MLflow, and Delta Lake; and major public cloud infrastructure and services.</p>
<p>The impact you will have:</p>
<ul>
<li>Provide partners with the level of enablement they need to assist their clients in evaluating and adopting Databricks, including hands-on Apache Spark programming and integration with the wider cloud ecosystem.</li>
<li>Engage with the partner technical community by leading workshops, seminars, and meet-ups.</li>
<li>You will be a Big Data Analytics expert on aspects of architecture and design and will share this with our partner network.</li>
<li>Show expertise by producing creative technical solutions and blog posts.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of pre-sales or post-sales experience working with external clients or partners across a variety of industry markets.</li>
<li>Understanding of customer-facing pre-sales or consulting role with a core strength in either data engineering or data science.</li>
<li>Experience demonstrating technical concepts, including presenting and whiteboarding.</li>
<li>Experience developing architectures within a public cloud (AWS, Azure, or GCP).</li>
<li>Coding experience in SQL, Python, Scala, or Java.</li>
<li>Expertise in at least one of the following:</li>
<li>Data Engineering technologies (Ex: Apache Spark, Hadoop, Kafka)</li>
<li>Data Warehousing (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning technologies (Ex: pandas, scikit-learn, HPO)</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience.</li>
<li>Written and verbal level fluency in Japanese and English.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Spark, MLflow, Delta Lake, AWS, Azure, GCP, SQL, Python, Scala, Java, Data Engineering, Data Warehousing, Data Science, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8347880002</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>35d691cd-c56</externalid>
      <Title>Partner Solutions Architect, CEMEA</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect, you will engage with our top Consulting and System Integrator (C&amp;SI) Partners and Field Engineering to drive adoption of the Data Intelligence Platform in our top customers through C-Suite Technical Executive alignment, engagement of Champions, and collaboration with our sales teams in the region.</p>
<p>You will develop ongoing partner capability via the &#39;Technical Champions program&#39; within our top C&amp;SI Partners to support CoE creation and delivery excellence.</p>
<p>You will provide strategic vision related to the Databricks Data Intelligence Platform aligning to the GSI engagement in our top accounts and develop and support programs to promote Partner expertise in the application of the Databricks Data Intelligence Platform.</p>
<p>Reporting to the Director, Field Engineering (Partner Solutions Architect)</p>
<p>The impact you will have:</p>
<p>A Partner Solutions Architect plays a crucial role in the success of the Databricks partner ecosystem by ensuring partners have the technical knowledge and experience to build, maintain and grow successful solutions for their customers.</p>
<p>This, in turn, drives the adoption and success of the Databricks Data Intelligence Platform and the Partner solutions in the market.</p>
<p>To achieve this, the PSA will:</p>
<ul>
<li>Accelerate Partner pre-sales and delivery in joint, strategic customer accounts by aligning Partner and Databricks Resources and providing technical expertise to accelerate adoption and consumption of the platform</li>
<li>Work closely with Databricks account teams to help our partner ecosystem scope, evaluate and deliver large scale data projects and transformational programmes</li>
<li>Grow the Partner Databricks delivery capability by providing technical expertise to help design, build and maintain repeatable solutions using Databricks Products and Services</li>
<li>Develop, maintain and grow Senior Technical Executive relationships to identify new business opportunities, innovative use cases, competition and support joint go to market initiatives</li>
<li>The role requires up to 40% travel to GSI Partner sites and the Databricks German offices</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive hands-on experience as a Data Professional in a modern cloud-based data stack</li>
<li>At least 3 years of experience with technical pre-sales and sales methodologies within a consumption business model</li>
<li>Ability to collaborate closely with partner organisations at the senior executive level to understand their needs and objectives and align them to Databricks products and services</li>
<li>Experience conducting training sessions, workshops, and webinars to educate partners on new technologies, features, and best practices</li>
<li>Experience developing and maintaining technical content such as whitepapers, case studies, and solution guides to assist partners in leveraging the Databricks offerings</li>
<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala)</li>
<li>Experience designing, implementing and maintaining end-to-end data architectures for Big Data, Data Warehousing and AI on MPP-based platforms</li>
<li>Ability to manage multiple, frequently changing priorities across multiple teams</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Intelligence Platform, Cloud-based data stack, Technical pre-sales and sales methodologies, Partner organisations, Senior executive level, Coding in Python, Java, Scala, Data Architectures for Big Data, Data Warehousing and AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407925002</Applyto>
      <Location>Berlin, Germany; Germany; Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4d67ce9b-156</externalid>
      <Title>Senior Product Manager, Data Enablement</Title>
      <Description><![CDATA[<p>Omada Health is on a mission to inspire and engage people in lifelong health, one step at a time.\n\nAs the Senior Product Manager for Data Enablement, you will be the enterprise-level owner of analytics capabilities, shared definitions, and enablement tooling that ensures Omada can answer critical business questions faster and more efficiently. This role will work across Product, Clinical, Commercial, Operations, and Finance to create the structures for how analytics are governed, accessed, and used.\n\nThe primary mandate of this role is to own the product strategy and roadmap for Omada’s enterprise analytics including governance, shared definitions, and enablement tooling. You are accountable for ensuring that teams across Product, Clinical, Commercial, Operations, and Finance can ask and answer critical business questions quickly, consistently, and with high trust in the underlying data. This means defining how key concepts are governed, how analytics tools are shaped and prioritized, and how cross-functional stakeholders engage with the data platform.\n\nYou will act as the central product owner for Omada’s analytics ecosystem, setting clear standards and driving adoption across the company. In practice, this includes:\n\n* Owning the roadmap for core data products and analytics tooling that power dashboards, reporting, and self-service analysis.\n* Driving the culture of data across our engineering, product and other business teams at Omada\n* Partnering with data, engineering, analytics, and business leaders to ensure analytics investments are aligned with company strategy and deliver measurable impact.\n* Collaborating with other product managers to incorporate data science concepts into their features &amp; experiences\n\nAbout you:\n\n* 10+ years of relevant product management or data product experience (8+ years with Master&#39;s degree, 5+ years with PhD)\n* 7+ years of experience with data technology and data management, including familiarity with:\n  * Modern data warehousing technologies (Redshift, Snowflake, BigQuery or similar)\n  * Cloud technologies, preferably AWS\n  * Business intelligence and analytics tools (Tableau, Amplitude, Looker, or similar)\n  * Data governance frameworks and data quality management\n  * SQL and ability to write queries and QA datasets\n* Subject matter expertise in enterprise analytics governance, data product management, or analytics enablement\n* Experience establishing governance processes &amp; associated tool implementation for data definitions, metrics, or analytics across multiple business functions\n* Proven ability to partner with Data Teams for technical evaluation of self service tools and/or statistical model training lifecycle\n* Proven ability to influence senior stakeholders and reconcile opposing viewpoints to drive consensus\n* Track record of working on problems that are not clearly defined, using advanced knowledge and conceptual thinking to develop solutions\n* Experience with healthcare data standards and regulatory requirements is strongly preferred\n* Bachelor&#39;s degree or equivalent relevant experience (advanced degree preferred)\n\nEssential Competencies:\n\n* Strategic &amp; Analytical:\n  * Able to think beyond constraints to imagine new ways to use data to impact members and customers\n  * Excellent organizational and analytical skills with strong technical understanding\n  * Comfortable analyzing data and leading research/discovery efforts to understand problem spaces, identify 
opportunities, and propose solutions\n  * Proven ability to set clear, achievable objectives and work plans for complex, cross-functional initiatives\n* Communication &amp; Influence:\n  * Exceptional written and verbal communication skills\n  * Ability to create formal networks with key decision makers and influence stakeholders across multiple functions\n  * Skilled at adapting communication style for different audiences, from technical teams to senior executives\n  * Experience negotiating matters of significance with senior management and/or major customers\n* Collaboration &amp; Problem Solving:\n  * Comfortable with ambiguity and adept at solving problems with limited resources and information\n  * Able to drive alignment among diverse stakeholders with competing priorities\n  * Great listener who asks insightful questions and doesn&#39;t accept &quot;It&#39;s how we&#39;ve always done it&quot; as an answer\n  * Proven ability to reconcile various and opposing stakeholder views to drive results\n* Execution &amp; Delivery:\n  * History of consistently delivering results against target metrics and commitments\n  * Experience managing product backlogs, writing clear product specifications, and driving execution\n  * Ability to exercise autonomous decision making with limited input on methods and techniques to obtain results\n\nYour impact:\n\nIn this role, you will directly contribute to transforming how Omada uses data to serve members, customers, and internal stakeholders. Your work will:\n\n* Eliminate fragmentation: Establish single sources of truth for critical business and clinical metrics, reducing confusion and accelerating decision-making\n* Increase trust: Build confidence in analytics across the organization through governed definitions and transparent processes\n* Accelerate insights: Enable analysts and business users to answer questions faster through improved tooling, AI-assisted analytics, and reusable data products\n* Scale effectively: Create governance frameworks and enablement systems that scale as Omada&#39;s product portfolio and customer base expand\n* Drive outcomes: Ensure that data and analytics capabilities directly support better health outcomes for members and business results for customers\n\nBenefits:\n\n* Competitive salary with generous annual cash bonus\n* Equity grants\n* Remote first work from home culture\n* Flexible Time Off to help you rest, recharge, and connect with loved ones\n* Generous parental leave\n* Health, dental, and vision insurance (and above market employer contributions)\n* 401k retirement savings plan\n* Lifestyle Spending Account (LSA)\n* Mental Health Support Solutions\n* ...and more!\n\nWe take a village to change healthcare. As we build together toward our mission, we strive to embody the following values in our day-to-day work. We hope these hold meaning for you as well as you consider Omada!\n\n* Cultivate Trust. We listen closely and we operate with kindness. We provide respectful and candid feedback to each other.\n* Seek Context. We ask to understand and we build connections. We do our research up front to move faster down the road.\n* Act Boldly. We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.\n* Deliver Results. We reward impact above output.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data technology, data management, modern data warehousing technologies, cloud technologies, business intelligence and analytics tools, data governance frameworks, data quality management, SQL, subject matter expertise in enterprise analytics governance, data product management, analytics enablement</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a healthcare technology company that provides virtual-first care for pre-diabetes, diabetes, hypertension, and musculoskeletal conditions.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7759052</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dd7fb909-289</externalid>
      <Title>Web Crawling Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are looking for a skilled and motivated Web Crawling Engineer to join our dynamic engineering team. The ideal candidate should have a solid background in distributed web crawling, scraping, and data extraction, with experience using advanced tools and technologies to collect and process data from diverse web sources at large scale.</p>
<p>Responsibilities</p>
<p>As a Web crawling engineer, you will be responsible for:</p>
<ul>
<li>Developing and maintaining web crawlers using Go to extract data from target websites.</li>
<li>Utilizing headless browsing techniques, such as Chrome DevTools, to automate and optimize data collection processes.</li>
<li>Collaborating with cross-functional teams to identify, scrape, and integrate data from APIs and web pages to support business objectives.</li>
<li>Creating and implementing efficient parsing patterns using tokenizers, regular expressions, XPaths, and CSS selectors to ensure accurate data extraction (see the sketch after this list).</li>
<li>Designing and managing distributed job queues using technologies such as Redis, Aerospike and Kubernetes to handle large-scale distributed crawling and processing tasks.</li>
<li>Developing strategies to monitor and ensure data quality, accuracy, and integrity throughout the crawling and indexing process.</li>
<li>Continuously improving and optimizing existing web crawling infrastructure to maximize efficiency and adapt to new challenges.</li>
</ul>
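<p>For illustration only: the parsing-pattern work described above (regular expressions plus an HTML tokenizer) might look something like the minimal sketch below. The role itself calls for Go, but this example uses the Python standard library for brevity, and the sample page and extracted fields are hypothetical.</p>
<pre><code># Illustrative sketch: combine a regex pattern with an HTML tokenizer to pull
# structured fields out of a fetched page. Field names and the sample page are
# hypothetical; production crawlers would use Go and far more robust parsing.
import re
from html.parser import HTMLParser

PRICE_RE = re.compile(r"\$\s?(\d+(?:\.\d{2})?)")  # example regex-based extractor


class LinkExtractor(HTMLParser):
    """Collect href attributes from anchor tags while tokenizing the page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract(html_text: str) -> dict:
    parser = LinkExtractor()
    parser.feed(html_text)
    return {"links": parser.links, "prices": PRICE_RE.findall(html_text)}


if __name__ == "__main__":
    sample = '&lt;p&gt;Widget for $19.99&lt;/p&gt;&lt;a href="/next-page"&gt;next&lt;/a&gt;'
    print(extract(sample))
</code></pre>
<p>XPath and CSS-selector extraction, distributed queueing with Redis or Aerospike, and politeness/scheduling concerns are deliberately left out of this sketch.</p>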
<p>About You</p>
<p>Core programming and web technologies</p>
<ul>
<li>Proficiency in Go (Golang)/Rust/Zig for building scalable and efficient web crawlers.</li>
<li>Deep understanding of TCP, UDP, TLS and HTTP/1.1, HTTP/2 and HTTP/3 protocols and web communication.</li>
<li>Knowledge of HTML, CSS, and JavaScript for parsing and navigating web content.</li>
<li>Familiarity with cloud platforms (AWS, GCP), orchestration (Kubernetes, Nomad), and containerization (Docker) for deployment.</li>
</ul>
<p>Data Structures &amp; Algorithms</p>
<ul>
<li>Mastery of queues, stacks, hash maps, and other data structures for efficient data handling.</li>
<li>Ability to design and optimize algorithms for large-scale web crawling.</li>
</ul>
<p>Web Scraping &amp; Data Acquisition</p>
<ul>
<li>Hands-on experience with networking and web scraping libraries.</li>
<li>Understanding of how search engines work and best practices for web crawling optimization.</li>
</ul>
<p>Databases &amp; Data Storage</p>
<ul>
<li>Experience with SQL and/or NoSQL databases (knowing Aerospike is a bonus) for storing and managing crawled data.</li>
<li>Familiarity with data warehousing and scalable storage solutions.</li>
</ul>
<p>Distributed Systems &amp; Big Data</p>
<ul>
<li>Knowledge of distributed systems (e.g., Hadoop, Spark) for processing large datasets.</li>
</ul>
<p>Bonus Skills (Nice-to-Have)</p>
<ul>
<li>Experience with web archiving projects &amp; tooling; open-source archiving is a big plus!</li>
<li>Experience applying Machine Learning to improve crawling efficiency or accuracy.</li>
<li>Experience with low-level networking programming and/or userspace TCP/IP stacks.</li>
</ul>
<p>Hiring Process</p>
<p>Here is what you should expect:</p>
<ul>
<li>Introduction call - 35 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Live-coding Interview - 45 min</li>
<li>System Design Interview - 45 min</li>
<li>Deep dive interview (optional) - 60min</li>
<li>Culture-fit discussion - 30 min</li>
<li>Reference checks</li>
</ul>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>This role is primarily based in one of our European offices: Paris, France and London, UK. We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting: currently France, UK, Germany, Belgium, Netherlands, Spain and Italy. In any case, we ask all new hires to visit our Paris HQ office:</p>
<ul>
<li>for the first week of their onboarding (accommodation and travelling covered)</li>
<li>then at least 2 days per month</li>
</ul>
<p>What we offer</p>
<p>💰 Competitive salary and equity</p>
<p>🧑‍⚕️ Health insurance</p>
<p>🚴 Transportation allowance</p>
<p>🥎 Sport allowance</p>
<p>🥕 Meal vouchers</p>
<p>💰 Private pension plan</p>
<p>🍼 Parental: Generous parental leave policy</p>
<p>🌎 Visa sponsorship</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Rust, Zig, TCP, UDP, TLS, HTTP/1.1, HTTP/2, HTTP/3, HTML, CSS, JavaScript, cloud platforms, orchestration, containerization, queues, stacks, hash maps, SQL, NoSQL databases, data warehousing, scalable storage solutions, distributed systems, Hadoop, Spark, web archiving projects, Machine Learning, low-level networking programming, userspace TCP/IP stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use. It has a global presence with teams in multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c96bf665-7d73-406b-8d8f-ddf8df5d160f</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>651a6835-81f</externalid>
      <Title>Member of Revenue Strategy &amp; Operations, Marketing Analytics</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. We are looking for a high-impact Marketing Analytics &amp; Operations Manager to architect and scale our marketing data ecosystem.</p>
<p>This role goes beyond reporting: you will be responsible for building the foundation that powers our go-to-market strategy, from attribution modeling to executive decision-making. You will partner closely with marketing, sales, and revenue leadership to connect marketing efforts to business outcomes, enabling smarter investments and accelerating growth in the institutional crypto ecosystem.</p>
<p><strong>Core Competencies</strong></p>
<ul>
<li>Advanced SQL proficiency: Ability to extract, transform, and analyze large datasets across multiple systems to generate insights</li>
<li>Salesforce expertise (2–3+ years): Hands-on experience as a Salesforce admin supporting marketing and GTM teams, including campaign tracking, attribution, and data architecture</li>
<li>Marketing data architecture ownership: Experience integrating and scaling data across tools (e.g., CRM, marketing automation, analytics platforms)</li>
<li>Analytical rigor + creative problem-solving: Strong critical thinker who can challenge assumptions, identify root causes, and design innovative solutions</li>
<li>Cross-functional leadership: Proven ability to collaborate across marketing, sales, RevOps, and product teams to drive aligned outcomes</li>
</ul>
<p><strong>Technical Skills:</strong></p>
<p>Technical &amp; Analytical Ownership</p>
<ul>
<li>Own and evolve marketing data infrastructure, including Salesforce and integrated MarTech systems</li>
<li>Write and optimize SQL queries to analyze campaign performance, pipeline generation, and revenue attribution</li>
<li>Build and maintain scalable dashboards and reporting frameworks across:
<ul>
<li>Performance marketing</li>
<li>BDR outbound</li>
<li>Field &amp; event marketing</li>
</ul>
</li>
<li>Develop multi-touch attribution models and conversion tracking methodologies</li>
<li>Analyze RoAS, LTV/CAC, and funnel efficiency to inform investment decisions</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<p>Systems &amp; Data Architecture</p>
<ul>
<li>Serve as Salesforce admin for marketing, ensuring clean data structures, campaign tracking, and attribution integrity</li>
<li>Assess and optimize the marketing tech stack, identifying gaps and opportunities for automation and scale</li>
<li>Partner closely with Data teams to ensure data consistency and governance across systems</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Produce executive-ready dashboards, reports, and presentations that clearly communicate marketing performance and ROI</li>
<li>Provide actionable insights to support budget allocation, channel strategy, and campaign prioritization</li>
<li>Conduct deep-dive analyses into client journeys, segmentation, and growth drivers</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Act as a key partner to Marketing, Sales, BDR, and Relationship Management teams</li>
<li>Translate business needs into data requirements and technical solutions</li>
<li>Influence stakeholders through clear communication, strong storytelling, and data-backed recommendations</li>
</ul>
<p><strong>What Sets You Apart:</strong></p>
<ul>
<li>Deep understanding of the institutional crypto landscape and client lifecycle</li>
<li>Experience evaluating and implementing MarTech tools and integrations</li>
<li>Ability to move fluidly between technical execution and strategic thinking</li>
<li>Strong storytelling skills, turning data into clear, compelling narratives for executives</li>
</ul>
<p><strong>You may be a fit for this role if you have:</strong></p>
<ul>
<li>2–3+ years of Salesforce experience supporting marketing or GTM teams (admin-level ownership preferred)</li>
<li>Strong hands-on experience with SQL and data analysis (required)</li>
<li>Experience building marketing dashboards, attribution models, and performance reporting systems</li>
<li>Familiarity with institutional crypto, fintech, or financial services and how GTM operates in these environments</li>
<li>A track record of building or improving data infrastructure and analytics frameworks</li>
<li>A scrappy, ownership-driven mindset, comfortable wearing multiple hats and operating in a fast-paced environment</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You have experience with data warehousing solutions (e.g., Snowflake, BigQuery, Redshift)</li>
<li>You have exposure to BI tools (e.g., Looker, Tableau, Hex)</li>
<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system. :)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Advanced SQL proficiency, Salesforce expertise, Marketing data architecture ownership, Analytical rigor + creative problem-solving, Cross-functional leadership, Data warehousing solutions, BI tools, Institutional crypto landscape and client lifecycle, MarTech tools and integrations, Strong storytelling skills</Skills>
      <Category>Marketing</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure. It has a Series D valuation over $3 billion.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/0a8757c6-e3e9-42d5-aa15-9b779f2e8c16</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3d849fbc-058</externalid>
      <Title>Member of Product, Data Platform</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>
<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>
<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>
<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>
<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>
<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>
<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>
<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>
<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>
<li>Deliver complex, cross-functional projects with multiple dependencies across engineering, analytics, compliance, and operations teams.</li>
<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>
<li>Ability to understand and reason about system architecture (including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components) and communicate clear requirements to engineering.</li>
<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>
<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>
<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>
<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>
<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>
<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>
<li>Drive company objectives through strategic data platform decisions and initiatives.</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Effectively influence and motivate teams across the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>
<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>
<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>
<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>
<li>Proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>
<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>
<li>Experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>
<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>
<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>
<li>Your empathy and adaptability not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>
<li>You self-describe as some combination of the following: creative, humble, ambitious, detail-oriented, hard-working, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>
</ul>
<p><strong>Although not a requirement, bonus points if you have:</strong></p>
<ul>
<li>Hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>
<li>Experience building data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>
<li>Experience supporting clients with data-related issues or concerns.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a3e7e545-094</externalid>
      <Title>FBS Data Production Support Analyst (Data Pipelines)</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>The purpose of this role is to ensure smooth operations of our production data assets. Activities will include monitoring production systems for incidents, alerting the applicable parties when incidents arise, and incident triage and management. The analyst will also carry out activities to prevent production incidents.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Work with Data Pipelines, handling incidents and root cause analysis (RCA).</li>
<li>Administer, analyze, and prioritize system issues and negotiate a course of action for resolution.</li>
<li>Support workflows and solutions; troubleshoot user errors and support reporting capabilities.</li>
<li>Utilize system monitoring utilities to monitor system availability.</li>
<li>Extract and compile system monitoring data to create availability scorecards and reports.</li>
<li>System Monitoring: Continuously monitor IT systems to ensure optimal performance and availability, identifying and addressing potential issues before they escalate.</li>
<li>Monitoring and Maintenance: Regularly monitor production data assets to ensure they are functioning correctly and efficiently, and alert the applicable parties if an issue arises in production.</li>
<li>Issue Resolution: Work with data team to identify, diagnose, and resolve technical issues related to production data assets. Work with relevant teams to implement effective solutions.</li>
<li>Incident Management: Manage, prioritize, and respond to incidents reported by users or raised by automated monitoring alerts, ensuring they are resolved promptly and in line with the incident management process. Diagnose problems, identify and implement fixes, and document incidents and resolutions for future reference.</li>
<li>Problem and Root Cause Analysis: Analyze recurring issues and conduct thorough investigations to determine their underlying causes, then implement preventive measures and long-term solutions to stop them from recurring.</li>
<li>Data Integrity: Work with the data team to ensure the accuracy and integrity of data produced and provided to the business, and implement and maintain quality control measures to prevent errors.</li>
<li>Documentation: Maintain comprehensive documentation of processes, system configurations, and troubleshooting procedures. Ensure documentation is created and owned, whether by the data team or the production support team.</li>
<li>Support: Provide support to data teams, data users and stakeholders. Respond to inquiries and assist with requests as applicable.</li>
<li>Optimization: Identify opportunities to optimize data production processes and system performance, and suggest and implement improvements to enhance efficiency and reliability.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Pipelines, Incident Management, System Monitoring, Data Integrity, Documentation, Problem Analysis, Root Cause Analysis, Preventative Measures, SQL, Python, Java, ETL processes, Database management, Data warehousing</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>One of the United States&apos; largest insurers, providing a wide range of insurance and financial services products with gross written premiums well over US$25 Billion.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ffvEvDAAYzjgBfJeCMdK9E/remote-fbs-data-production-support-analyst-(data-pipelines)-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9aaf497d-8cf</externalid>
      <Title>Senior Data Consultant (H/F)</Title>
      <Description><![CDATA[<p>A senior data consultant is needed to join the team at Fifty-Five. The successful candidate will be responsible for promoting data science and data processing to support digital marketing, following a project plan with different milestones, ensuring data quality and accuracy, and delivering high-quality results to clients.</p>
<p>The role will involve working closely with the technical and business teams to deliver marketing-oriented AI cases, integrating business requirements into a relevant activation strategy. The consultant will also be responsible for analysing data for clients, responding to digital activity management issues using key performance indicators, and participating in the development of Fifty-Five&#39;s data science offer.</p>
<p>The ideal candidate will have a degree in engineering or a related field, with at least 2 years of experience in data consulting. They will have strong knowledge of Microsoft Office, a strong analytical mindset, excellent communication skills, and a commercial spirit.</p>
<p>The company offers a range of benefits, including a competitive salary, a 10-euro daily meal ticket, 50% transport costs, a flexible work-life balance, and a modern and stimulating work environment.</p>
<p>Fifty-Five is committed to diversity and inclusion, and welcomes applications from all candidates, regardless of their background, age, gender, or disability.</p>
<p>The company is part of The Brandtech Group, a global data company that helps brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer acquisition and retention.</p>
<p>The role is based in Paris, with the opportunity to work on a global scale.</p>
<p>If you are a senior data consultant looking for a new challenge, please apply now.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Promote data science and data processing to support digital marketing</li>
<li>Follow a project plan with different milestones</li>
<li>Ensure data quality and accuracy</li>
<li>Deliver high-quality results to clients</li>
<li>Work closely with technical and business teams to deliver marketing-oriented AI cases</li>
<li>Integrate business requirements into a relevant activation strategy</li>
<li>Analyse data for clients</li>
<li>Respond to digital activity management issues using key performance indicators</li>
<li>Participate in the development of Fifty-Five&#39;s data science offer</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Degree in engineering or a related field</li>
<li>At least 2 years of experience in data consulting</li>
<li>Strong knowledge of Microsoft Office</li>
<li>Strong analytical mindset</li>
<li>Excellent communication skills</li>
<li>Commercial spirit</li>
</ul>
<p><strong>Preferred skills:</strong></p>
<ul>
<li>Strong knowledge of Python</li>
<li>Experience with data visualisation tools</li>
<li>Knowledge of machine learning algorithms</li>
<li>Experience with data warehousing and business intelligence tools</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary</li>
<li>10-euro daily meal ticket</li>
<li>50% transport costs</li>
<li>Flexible work-life balance</li>
<li>Modern and stimulating work environment</li>
</ul>
<p><strong>Company culture:</strong></p>
<ul>
<li>Fifty-Five is committed to diversity and inclusion</li>
<li>Welcomes applications from all candidates, regardless of their background, age, gender, or disability</li>
<li>Part of The Brandtech Group, a global data company that helps brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer acquisition and retention</li>
</ul>
<p><strong>Location:</strong></p>
<ul>
<li>Paris, France</li>
</ul>
<p><strong>Type of contract:</strong></p>
<ul>
<li>Full-time</li>
</ul>
<p><strong>Salary:</strong></p>
<ul>
<li>Competitive salary</li>
</ul>
<p><strong>Workplace type:</strong></p>
<ul>
<li>On-site</li>
</ul>
<p><strong>Category:</strong></p>
<ul>
<li>Data science</li>
</ul>
<p><strong>Industry:</strong></p>
<ul>
<li>Technology</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary</Salaryrange>
      <Skills>Microsoft Office, Data science, Data processing, Digital marketing, Machine learning, Data visualisation, Data warehousing, Business intelligence, Python, Data visualisation tools, Machine learning algorithms, Data warehousing and business intelligence tools</Skills>
      <Category>Data science</Category>
      <Industry>Technology</Industry>
      <Employername>Fifty-Five</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fifty-Five is a global data company that helps brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer acquisition and retention. The company has over 320 employees worldwide.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ap7XkdJEREJccZrXVta5Fv/senior-data-consultant-(h%2Ff)-in-paris-at-fifty-five</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>aafa7b92-fa6</externalid>
      <Title>Senior Consultant - Data Engineering &amp; Data Science (m/w/d)</Title>
<Description><![CDATA[<p>Are you looking to advance your career and work with experienced, talented colleagues to solve our clients&#39; most important challenges? We are growing further and looking for enthusiastic individuals to strengthen our team. You will be part of a dynamic, fast-growing company with over 300,000 employees.</p>
<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>
<p>As a Consultant/Senior Consultant in the Data Engineering &amp; Data Science field, you will work hands-on on the conception, development, and implementation of modern data and analytics solutions. You will support the entire project lifecycle - from data intake and transformation to analytics and machine learning to productive operation.</p>
<p>You will work closely with data engineers, architects, data scientists, and subject matter experts to implement scalable, reliable, and value-adding solutions in complex customer environments.</p>
<p><strong>Your Tasks</strong></p>
<ul>
<li>Apply data science methods (machine learning, deep learning, GenAI) to solve concrete business questions</li>
<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>
<li>Set up data pipelines for analytical workloads</li>
<li>Support the productive implementation of data and ML solutions, including monitoring and optimisation</li>
</ul>
<p><strong>What You Bring - Required</strong></p>
<ul>
<li>At least 3 years of relevant professional experience in the field of data engineering, data science, or analytics</li>
<li>Hands-on experience in implementing data and analytics solutions in (customer) projects</li>
<li>Strong problem-solving skills and a pragmatic, implementation-oriented way of working</li>
</ul>
<p><strong>Data Engineering Fundamentals</strong></p>
<ul>
<li>Experience in setting up data pipelines (ingestion, transformation, storage)</li>
<li>Solid understanding of data modeling, data transformations, and feature engineering</li>
<li>Experience with cloud-based data platforms, such as:
<ul>
<li>Azure, AWS, or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse/Microsoft Fabric</li>
</ul>
</li>
<li>Knowledge of CI/CD concepts and production-ready deployments</li>
</ul>
<p><strong>Applied Data Science &amp; Analytics</strong></p>
<ul>
<li>Experience in applying GenAI, deep learning, and machine learning procedures as well as statistical analyses</li>
<li>Very good programming skills in Python</li>
<li>Very good SQL skills and experience with relational databases</li>
<li>Experience in deploying and productively using ML models</li>
<li>Ability to translate analytical results into business-relevant insights</li>
<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with:
<ul>
<li>Streaming technologies (e.g. Kafka, Azure Event Hubs)</li>
<li>Time series analysis, NLP applications, or system modeling</li>
<li>NoSQL databases (e.g. MongoDB, Cosmos DB)</li>
<li>Docker and Kubernetes</li>
<li>Data visualization tools like Power BI, Tableau</li>
<li>Cloud or architecture certifications</li>
</ul>
</li>
</ul>
<p><strong>Language &amp; Mobility (Germany)</strong></p>
<ul>
<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>
<li>Very good English skills</li>
<li>Project-related travel readiness</li>
</ul>
<p><strong>Your Team</strong></p>
<p>You will become part of our growing Data &amp; Analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>You will become an employee of a globally renowned management consulting firm at the forefront of technological innovation and industrial transformation. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm backed by the scale of Infosys, we can support our customers worldwide and throughout the entire transformation process in a true partnership.</p>
<p>Our values, IC-LIFE (Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence), form our compass. Further information can be found on our career website.</p>
<p>In Europe, we have been recognized by the Financial Times and Forbes as one of the leading consulting firms. Infosys was ranked among the top employers in Germany in 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five consecutive years.</p>
<p>We offer a market-leading salary, attractive additional benefits, and excellent opportunities for further education and development. Curious? Then we look forward to your application - apply now!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Science, Machine Learning, Deep Learning, GenAI, Data Engineering, Data Warehousing, Data Lakes, Lakehouses, Data Pipelines, Cloud-based Data Platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse, Microsoft Fabric, CI/CD, Python, SQL, Relational Databases, Streaming Technologies, Time Series Analysis, NLP Applications, System Modeling, NoSQL Databases, Docker, Kubernetes, Data Visualization Tools, Cloud Certifications, Architecture Certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with a market-leading brand in every sector, while its parent organization Infosys is a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ecAfMkjFkA97qaoimVMGNF/hybrid-(senior)-consultant---data-engineering-%26-data-science-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe</Applyto>
      <Location>Munich, Bavaria, Germany</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>89cf8655-bba</externalid>
      <Title>Sr Business Solutions Analyst - Data Product Owner</Title>
      <Description><![CDATA[<p>Develop complex value-based analytics solutions for quality, business process functionality, or other strategic/data/analytic initiatives that are complex in nature.</p>
<p>Collaborate with cross-functional teams to identify and establish analytics requirements.</p>
<p>Create detailed documentation and business plans that address stakeholder needs.</p>
<p>Liaise between IT and Product Management for data solutions.</p>
<p>Act as a trusted business partner, working with business leaders and cross-functional teams to identify key challenges and work on resolutions with the goal of improving organisational performance.</p>
<p>Use advanced understanding of data elements, structures, standards typically involved in financial services transactions, reporting, and cloud-based data storage/sharing to adopt and apply standards in a way that satisfies and balances regulatory and compliance requirements with business needs.</p>
<p>Develop and implement complex enterprise-wide data integration strategies that utilise various data-marts and data warehouse applications to enhance reporting.</p>
<p>Continuously work to improve data platforms, dashboards, and reports to meet emerging and changing business demands and guidelines.</p>
<p>Consult, with minimal guidance from a Principal, on UAT, technical specifications, strategic planning, contract design, technical support, testing updates, and training.</p>
<p>Manage data from multiple sources and use project management skills to complete complex projects with minimal coaching and guidance.</p>
<p>Transmit data and proactively work to ensure data quality.</p>
<p>Troubleshoot data issues and devise creative and effective ways to avoid or mitigate issues.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Warehousing, Data Architecture &amp; Design, Auto &amp; Home Insurance, SQL, Snowflake, Informatica, Power BI</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/n5t7HfGHJjo9guNgu39F28/hybrid-sr-business-solutions-analyst---data-product-owner-in-pune-at-capgemini</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>46b08bd3-3e9</externalid>
      <Title>Director, Workplace Data &amp; Insights Lead</Title>
      <Description><![CDATA[<p>About this role</p>
<p>If you&#39;re passionate about data and want to influence how one of the world&#39;s leading asset management firms operates, this is your opportunity. At BlackRock, we believe the workplace is a strategic asset—not just a location—and that data and insights should illuminate possibilities, guiding continuous evolution and amplifying BlackRock&#39;s culture and impact. In this role, you&#39;ll turn insights into action, shaping how employees experience work, and how BlackRock delivers for clients.</p>
<p>What&#39;s In It for You</p>
<ul>
<li>Influence at Scale: Your insights will shape how BlackRock Enterprise Services operates globally—impacting nearly 30,000 employees and the way we deliver for clients.</li>
<li>Executive Visibility: Work in close partnership with senior leaders to influence strategic decisions and Enterprise Services&#39; capital investment priorities.</li>
<li>Innovation Opportunities: Drive initiatives like smart building analytics, AI-assisted insights, and frictionless experiences across office environments and digital platforms.</li>
<li>Career Growth: Accelerate your career by crafting a data strategy that influences how BlackRock&#39;s workplaces operate globally—while building an outstanding insights function.</li>
<li>Purposeful Work: Help build workplaces that cultivate collaboration, well-being, and productivity—making a tangible difference in how people work every day.</li>
</ul>
<p>About BlackRock</p>
<p>At BlackRock, our smartest investment is you. We help millions of people experience financial well-being—serving clients from global institutions to families and individuals. Our promise is simple: deliver clarity, trusted advice, and solutions that secure a better financial future.</p>
<p>About Enterprise Services</p>
<p>Enterprise Services provides strategic workplace solutions and robust infrastructure to deliver a consistent employee experience globally. Our team spans real estate strategy, workplace operations and facilities management, risk and resilience, corporate security and executive operations, and workplace experience and insights.</p>
<p>What You&#39;ll Do</p>
<ul>
<li>Set the Strategy: Define the workplace data and analytics vision, aligning to firm priorities and translating insights into measurable business value (cost, risk, experience, sustainability).</li>
<li>Shape and Grow the Function: Lead a hybrid team (FTE and CWKs) including a Reporting Lead and three Data Scientists. Develop the team&#39;s existing foundation to improve capabilities and evolve Workplace Data &amp; Insights into a strategic, high-impact function.</li>
<li>Unify Data: Integrate sources like occupancy sensors, IWMS, badge systems, travel platforms, HRIS, and service tools into a trusted data ecosystem.</li>
<li>Deliver Insights: Build executive dashboards and scorecards; turn complex data into clear narratives that drive action.</li>
<li>Enable Decisions: Collaborate with Workplace Planning, Operations and Experience teams to guide space redesigns, capital allocation, travel policies, and service improvements.</li>
<li>Drive Governance: Establish oversight, protocols, and controls for data quality, privacy, and responsible AI use. Operate within the firm&#39;s AI governance structure.</li>
<li>Innovate: Advance automation, real-time analytics, and smart building use cases; pilot AI-assisted insights.</li>
<li>Influence &amp; Advise: Provide thought leadership and advice to Enterprise Services&#39; leaders, crafting strategic opportunities and supporting key decisions.</li>
</ul>
<p>What You Bring</p>
<ul>
<li>Experience: 10+ years in workplace analytics, corporate real estate, enterprise reporting, or similar domains.</li>
<li>Technical Skills: Familiarity with IWMS, IoT/sensors, data warehousing, and BI tools (Power BI, Tableau, SQL).</li>
<li>Leadership: Ability to influence senior collaborators, tell compelling data stories, and lead cross-functional programs.</li>
<li>Governance Expertise: Demonstrated background in data stewardship, standards, and adherence.</li>
<li>Business Impact Orientation: Track record of linking insights to outcomes and measuring results.</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IWMS, IoT/sensors, data warehousing, BI tools (Power BI, Tableau, SQL), leadership, governance expertise, business impact orientation</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global asset management firm that helps millions of people experience financial well-being by serving clients from global institutions to families and individuals.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/tpdKcUvbVMHRN1Lt8G2s8W/director%2C-workplace-data-%26amp%3B-insights-lead-in-london-at-blackrock</Applyto>
      <Location>London, Edinburgh</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>93e8d013-2c1</externalid>
      <Title>Sr Business Solutions Analyst - Data Product Owner</Title>
      <Description><![CDATA[<p>Develop complex value-based analytics solutions for quality, business process functionality, or other strategic/data/analytic initiatives. Collaborate with cross-functional teams to identify and establish analytics requirements. Create detailed documentation and business plans that address stakeholder needs. Liaise between IT and Product Management for data solutions. Work with business leaders and cross-functional teams to identify key challenges and work on resolutions to improve organisational performance. Use advanced understanding of data elements, structures, standards typically involved in financial services transactions, reporting, and cloud-based data storage/sharing to adopt and apply standards in a way that satisfies and balances regulatory and compliance requirements with business needs. Develop and implement complex enterprise-wide data integration strategies that utilise various data-marts and data warehouse applications to enhance reporting. Continuously work to improve data platforms, dashboards, and reports to meet emerging and changing business demands and guidelines. Consult with minimal guidance on UAT, technical specifications, strategic planning, contract design, technical support, testing updates, and trainings. Manage data from multiple sources and use project management skills to complete complex projects with minimal coaching and guidance. Transmit data and proactively work to ensure data quality. Troubleshoot data issues and devise creative and effective ways to avoid or mitigate issues.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing complex value-based analytics solutions</li>
<li>Collaborating with cross-functional teams to identify and establish analytics requirements</li>
<li>Creating detailed documentation and business plans</li>
<li>Liaising between IT and Product Management for data solutions</li>
<li>Working with business leaders and cross-functional teams to identify key challenges and work on resolutions to improve organisational performance</li>
<li>Developing and implementing complex enterprise-wide data integration strategies</li>
<li>Continuously working to improve data platforms, dashboards, and reports</li>
<li>Consulting with minimal guidance on various technical and business aspects</li>
<li>Managing data from multiple sources and using project management skills to complete complex projects</li>
<li>Troubleshooting data issues and devising creative and effective ways to avoid or mitigate issues</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Warehousing, Data Architecture &amp; Design, Auto &amp; Home Insurance, SQL, Snowflake, Informatica, Power BI</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/iUXtT6bRSaetL2aU9Hq8RU/hybrid-sr-business-solutions-analyst---data-product-owner-in-hyderabad-at-capgemini</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>11a36eab-3cb</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? At Future, we are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us.</p>
<p>This is an exciting opportunity for someone who excels in a creative environment, enjoys solving complex data challenges, and is eager to build impactful business insights. In this role, you will report directly to the Head of Data Engineering.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and maintain new/current features of the data platform.</li>
<li>Responsible for delivery of development projects, including scoping, writing and sizing of stories involved.</li>
<li>Take ownership of BAU processes, develop area specific domain mastery, and seek means to automate them or reduce their impact.</li>
<li>Propose and advocate for changes to reduce risk, cost, and overhead.</li>
<li>Provide appropriate documentation for pipelines developed.</li>
<li>Parameterise pipelines so configuration can be changed easily without having to perform deep changes to the codebase.</li>
<li>Apply appropriate testing principles to ensure code is fit for purpose.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>Experience using Python on Google Cloud Platform for Big Data projects, BigQuery, DataFlow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer</li>
<li>SQL development skills</li>
<li>Experience using Dataform or dbt</li>
<li>Demonstrated strength in data modelling, ETL development, and data warehousing</li>
<li>Knowledge of data management fundamentals and data storage principles</li>
<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is £50,000 - £60,000</p>
<p>This is a Hybrid role from our Bath Office, working three days from the office, two from home. Plus more great perks, which include:</p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus</li>
<li>Refer a friend and get rewarded when they join Future</li>
<li>Wellbeing support with access to our Colleague Assistant Programmes</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£50,000 - £60,000</Salaryrange>
<Skills>Python, Google Cloud Platform, BigQuery, DataFlow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, Dataform, dbt, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3535C2B9B5</Applyto>
      <Location>Bath</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ee8e022d-8b0</externalid>
      <Title>Senior Software Engineer</Title>
<Description><![CDATA[<p>As a Senior Software Engineer, you&#39;ll play a crucial role helping to lead the technical aspects of Horizon, our data warehouse product. Use your .Net, SQL, data handling, automation, and cloud infrastructure experience to build and optimize scalable .NET applications. Assist in cultivating a culture of continuous improvement to enhance product quality and team performance, and deliver impactful solutions.</p>
<p><strong>What you&#39;ll be doing</strong></p>
<ul>
<li>Lead the development of robust, high-quality, and scalable .Net applications, prioritizing automation, and streamlining processes to enhance efficiency and reduce manual efforts</li>
<li>Work with the team to maintain high code standards and share responsibility for product quality</li>
<li>Diagnose and resolve issues, communicating their impact to stakeholders and helping to prioritize solutions</li>
<li>Work with relevant stakeholders to encourage healthy team collaboration and communication processes such as code reviews, test shares, and design discussions</li>
<li>Maintain best practice code quality, design and architecture</li>
<li>Work constructively to assist team members, support learning and growth, and lead by good example</li>
<li>There is scope to lead and support a cross-functional team of Software Engineers, Testers, and Business Analysts (for those interested in incorporating a Team Lead capacity into the role)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>A seasoned, senior level, backend Software Engineer</li>
<li>Advanced experience in .Net and relational databases, with a well-developed understanding of modern software delivery practices and life cycle</li>
<li>Experience deploying applications in Kubernetes and a strong grasp of observability and distributed logging in cloud-native environments</li>
<li>Proven experience with DevOps pipelines and managing cloud infrastructure</li>
<li>Experience with big data queries, data warehousing, and data-heavy domains would be highly beneficial</li>
<li>Previous team leadership responsibilities would be preferred (for those interested in incorporating a Team Lead capacity into the role)</li>
</ul>
<p>For this particular role, we are currently only considering applicants with the right to live and work in New Zealand without the need for employer sponsorship.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Excellent work/life balance including a 4 ½ day working week</li>
<li>Hybrid working (home and office-based split)</li>
<li>Medical and Life insurance (after qualifying period)</li>
<li>Volunteer day, enhanced paid parental leave and wellness benefits</li>
<li>Strong mentoring &amp; career development focus</li>
<li>Fun team events including the Vista Innovation Cup</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>.Net, SQL, data handling, automation, cloud infrastructure, Kubernetes, observability, distributed logging, DevOps pipelines, cloud infrastructure, big data queries, data warehousing, data-heavy domains</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Vista Group</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Vista Group makes software for the cinema industry and serves cinemas, film distributors, and moviegoers worldwide.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/E2296A9620</Applyto>
      <Location>Auckland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>6d5e164b-74d</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>Data Engineer</strong></p>
<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? We are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us. This is an exciting opportunity for someone who thrives in a creative environment and enjoys solving complex data challenges. You&#39;ll report into the Lead Data Engineer for this position and sit within the wider Data Engineering team.</p>
<p>The Data &amp; Business Intelligence team guides our organisation to become more data-driven. Our ability to respond quickly to market changes gives us a competitive edge. By ensuring visibility of objective performance data, we empower our teams to make rapid, informed decisions that enhance overall performance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Maintain new/current features of the data platform.</li>
<li>Responsible for delivery of development projects.</li>
<li>Utilise established software engineering practices and principles.</li>
<li>Take ownership of BAU processes and develop area-specific domain mastery.</li>
<li>Ensure compliance requirements are followed.</li>
<li>Utilise CI/CD and infrastructure as code (Terraform) for rapid deployment of changes.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>Experience using Python on Google Cloud Platform for Big Data projects: BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer.</li>
<li>SQL development skills.</li>
<li>Demonstrated strength in data modelling, ETL development, and data warehousing.</li>
<li>Knowledge of data management fundamentals and data storage principles.</li>
<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems.</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is £45,000 - £50,000. This is a hybrid role based in our Bath office, working three days from the office and two from home. Plus more great perks, which include:</p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time.</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus.</li>
<li>Refer a friend and get rewarded when they join Future.</li>
<li>Wellbeing support with access to our Colleague Assistance Programmes.</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£45,000 - £50,000</Salaryrange>
      <Skills>Python, Google Cloud Platform, BigQuery, Dataflow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/BDB1B6F4CF</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ecdc5591-27d</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Data Engineer to join our team. As a Data Engineer, you will play a key role in the development and maintenance of our data infrastructure, ensuring that our data is accurate, reliable, and secure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, develop, and maintain data pipelines and architectures to support our data-driven decision-making processes</li>
<li>Collaborate with our data scientists and analysts to understand their data requirements and develop solutions to meet those needs</li>
<li>Work closely with our IT team to ensure that our data systems are integrated with our existing infrastructure</li>
<li>Develop and maintain data quality and governance processes to ensure that our data is accurate and reliable</li>
<li>Participate in the development and maintenance of our data architecture roadmap</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Mathematics, or a related field</li>
<li>2+ years of experience in data engineering or a related field</li>
<li>Strong understanding of data engineering principles and practices</li>
<li>Experience with data warehousing and business intelligence tools</li>
<li>Strong programming skills in languages such as Python, Java, or C++</li>
<li>Experience with cloud-based data platforms such as AWS or GCP</li>
<li>Strong communication and collaboration skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a leading Formula One racing team</li>
<li>Collaborative and dynamic work environment</li>
<li>Professional development and growth opportunities</li>
<li>Access to state-of-the-art technology and tools</li>
<li>Flexible working hours and remote work options</li>
</ul>
<p>Note: The salary range for this position is competitive and will be discussed during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive and will be discussed during the interview process</Salaryrange>
      <Skills>data engineering, data warehousing, business intelligence, Python, Java, C++, AWS, GCP, cloud computing, data architecture, data governance</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>Williams Racing</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.williamsf1.com.png</Employerlogo>
      <Employerdescription>Williams Racing is a British Formula One racing team that has been in operation since 1977. The team is based in Grove, Oxfordshire.</Employerdescription>
      <Employerwebsite>https://careers.williamsf1.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.williamsf1.com/job/trackside-operations-lead-hospitality-in-london-jid-487</Applyto>
      <Location>Grove</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>bb7bb8e9-e31</externalid>
      <Title>Data Engineer - 12 Month TFT</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced Data Engineer to join our team at Electronic Arts. As a Data Engineer, you will collaborate with the Marketing team to implement data strategies and develop complex ETL pipelines that support dashboards for promoting deeper understanding of our business.</p>
<p>You will have experience developing and establishing scalable, efficient, automated processes for large-scale data analyses. You will also stay informed of the latest trends and research on all aspects of data engineering and analytics.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement and maintain efficient, scalable and robust data pipelines using cloud-native and open-source technologies</li>
<li>Develop and optimize ETL/ELT processes to ingest, transform, and deliver data from diverse sources</li>
<li>Automate deployment and monitoring of data workflows using CI/CD best practices</li>
<li>Guide communications between our users and studio engineers to provide scalable end-to-end solutions</li>
<li>Promote strategies to improve our data modelling, quality and architecture</li>
<li>Participate in code reviews, mentor junior engineers, and contribute to team knowledge sharing</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field</li>
<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>
<li>Experience in data modelling and tools such as dbt, ETL processes, and data warehousing</li>
<li>Experience with at least one programming language such as Python or Java</li>
<li>Experience with version control and code review tools such as Git</li>
<li>Knowledge of the latest data pipeline orchestration tools, such as Airflow</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tools (e.g., Docker, Terraform, CloudFormation)</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience in gaming and working with its telemetry data or data from similar sources</li>
<li>Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark, Iceberg</li>
<li>Experience in developing engineering solutions based on near real-time/streaming dataset</li>
<li>Exposure to AI/ML, MLOps concepts and collaboration with data science or AI teams.</li>
</ul>
<p>Pay Transparency - North America</p>
<p>The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>
<p>Pay Ranges: $100,000 - $139,500 CAD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>temporary</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$100,000 - $139,500 CAD</Salaryrange>
      <Skills>SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, Java, Git, Airflow, cloud platforms, infrastructure-as-code tools, gaming telemetry data, big data platforms, EMR, Databricks, Kafka, Spark, Iceberg, AI/ML, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of popular games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-12-month-TFT/212451</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>0841fcf4-9ab</externalid>
      <Title>Data Engineer SE - II</Title>
      <Description><![CDATA[<p>We are on a mission to rid the world of bad customer service by “mobilizing” the way help is delivered. Today’s consumers want an always-available customer service experience that leaves them feeling valued and respected.</p>
<p>Helpshift helps B2B brands deliver this modern customer service experience through a mobile-first approach. We have changed how conversations take place, moving the conversation away from a slow, outdated email and desktop experience to an in-app chat experience that allows users to interact with brands in their own time.</p>
<p>Through our market-leading AI-powered chatbots and automation, we help brands deliver instant and rapid resolutions. Because agents play a key role in delivering help, our platform gives agents superpowers with automation and AI that simply works.</p>
<p><strong>About the Team</strong></p>
<p>Consumers care first and foremost about having their time valued by brands. Brands need insights into their customer service operation to serve their consumers effectively. Such insights and analytics are delivered through various data products like in-app analytics dashboards and data-sharing integrations.</p>
<p>The data platform team is responsible for designing, building, and maintaining the data infrastructure that enables such data and analytics products at scale. We build and manage data pipelines, databases, and other data structures to ensure that the data is reliable, accurate, and easily accessible.</p>
<p>We also enable internal stakeholders with business intelligence and support machine learning teams with data ops. This team manages the platform that handles 2 million events per minute and processes 1+ terabytes of data daily.</p>
<p><strong>About the Role</strong></p>
<ul>
<li>Building maintainable data pipelines, both for data ingestion and operational analytics, for data collected from 2 billion devices and 900M monthly active users</li>
<li>Building customer-facing analytics products that deliver actionable insights and data and make it easy to detect anomalies</li>
<li>Collaborating with data stakeholders to understand their data needs and being a part of the analysis process</li>
<li>Writing design specifications and test, deployment, and scaling plans for the data pipelines</li>
<li>Mentoring people in the team &amp; organization</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience in building and running data pipelines that scale for TBs of data</li>
<li>Proficiency in a high-level object-oriented programming language (Python or Java) is a must</li>
<li>Experience with cloud data platforms such as Snowflake and AWS (EMR/Athena) is a must</li>
<li>Experience in building modern data lakehouse architectures using Snowflake and columnar formats like Apache Iceberg/Hudi, Parquet, etc.</li>
<li>Proficiency in data modeling, SQL query profiling, and data warehousing skills is a must</li>
<li>Experience in distributed data processing engines like Apache Spark, Apache Flink, Dataflow/Apache Beam, etc.</li>
<li>Knowledge of workflow orchestrators like Airflow, Dagster, etc. is a plus</li>
<li>Data visualization skills are a plus (PowerBI, Metabase, Tableau, Hex, Sigma, etc)</li>
<li>Excellent verbal and written communication skills</li>
<li>Bachelor’s Degree in Computer Science (or equivalent)</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Hybrid setup</li>
<li>Worker&#39;s insurance</li>
<li>Paid Time Offs</li>
<li>Other employee benefits to be discussed by our Talent Acquisition team in India.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Snowflake, AWS, EMR/Athena, Apache Iceberg/Hudi, Parquet, Apache Spark, Apache Flink, Dataflow/Apache Beam, Airflow, data modeling, SQL query profiling, data warehousing, PowerBI, Metabase, Tableau, Hex, Sigma</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a company that provides a mobile-first customer service experience for B2B brands. It has over 900 million active monthly consumers and is used by hundreds of leading brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D451DB2325</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>6ea8846c-bf3</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>Data Engineer</strong></p>
<p>We are seeking a highly skilled Data Engineer to join our Data and Analytics team. As a Data Engineer, you will play a key role in the development and maintenance of our data infrastructure, ensuring that our data is accurate, reliable, and easily accessible to our teams.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, develop, and maintain data pipelines and architectures to support our data-driven decision-making processes</li>
<li>Collaborate with cross-functional teams to identify data requirements and develop solutions to meet those needs</li>
<li>Work closely with our data scientists to ensure that our data is accurate, complete, and easily accessible</li>
<li>Develop and maintain data visualizations and reports to support our business needs</li>
<li>Troubleshoot data-related issues and implement solutions to prevent future occurrences</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Mathematics, or a related field</li>
<li>2+ years of experience in data engineering or a related field</li>
<li>Strong understanding of data structures, algorithms, and software design patterns</li>
<li>Experience with data warehousing and business intelligence tools</li>
<li>Strong programming skills in languages such as Python, Java, or C++</li>
<li>Experience with cloud-based data platforms such as AWS or GCP</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a leading Formula One team</li>
<li>Collaborative and dynamic work environment</li>
<li>Professional development opportunities</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>A competitive salary and benefits package</li>
<li>The opportunity to work with a leading Formula One team</li>
<li>A collaborative and dynamic work environment</li>
<li>Professional development opportunities</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and experienced Data Engineer looking for a new challenge, please submit your application, including your CV and a cover letter, to [insert contact email or link to application portal].</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data pipelines, data architecture, data visualization, data warehousing, business intelligence, Python, Java, C++, AWS, GCP, cloud computing, big data, machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>Williams Racing</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.williamsf1.com.png</Employerlogo>
      <Employerdescription>Williams Racing is a British Formula One racing team that designs, manufactures, and races Formula One cars. The team is based in Grove, Oxfordshire.</Employerdescription>
      <Employerwebsite>https://careers.williamsf1.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.williamsf1.com/job/executive-office-coordinator-in-grove-wantage-jid-491</Applyto>
      <Location>Grove</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>1ace7478-7a2</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>
<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p><strong>Responsibilities:</strong></p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>
<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>
<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>
<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>
<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Have deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>
</ul>
</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>
<li>Experience working in fintech, financial services, or highly regulated environments</li>
<li>Security engineering background with focus on data protection and access controls</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>
<li>Storage: GCS, S3</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>
<li>Languages: Python, Go, SQL</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>aa015612-5ff</externalid>
      <Title>Product &amp; Solutions Lead, Safety and Security</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Product &amp; Solutions Lead, Safety and Security</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Intelligence &amp; Investigations</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$288K – $425K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Intelligence &amp; Investigations (I2) team detects and disrupts abuse and strategic risks so people can use AI safely. We translate real-world signals, investigations, and external threat intelligence into practical mitigations, operating guidance, and partner-ready support that improves safety outcomes across the AI ecosystem.</p>
<p><strong>About the Role</strong></p>
<p>As a Product &amp; Solutions Lead focused on safety and security, you will build and operate 0–1 products, services, and technical solution packages that help developers and public institutions move from experimentation to durable, trusted outcomes—while maintaining public safety, transparency, and respect for privacy and rights.</p>
<p>This role balances two modes of delivery:</p>
<ol>
<li>Bespoke products and technical solutions for strategic internal and external partners, and</li>
<li>Scalable product and solution packages that can be reused broadly across partners and deployments.</li>
</ol>
<p>Training is a component of scale, but not the center of gravity. You will also ship reference implementations, playbooks, evaluation kits, and repeatable operating models that partners can adopt and operate.</p>
<p>You will work directly with engineers and a multidisciplinary group of safety and geopolitical analysts, and data and quantitative scientists to convert complex, evolving challenges into solutions that teams can adopt in high-stakes environments.</p>
<p>This role is based in San Francisco, CA (hybrid, 3 days/week). Relocation support is available.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Own the 0–1 roadmap for safety and security solution offerings: define the target users, problem statements, tools, operating models, success metrics, and the set of reusable deliverables we ship.</li>
<li>Design and ship bespoke technical solutions for priority partners (internal and external), then abstract what works into reusable patterns and toolkits.</li>
<li>Build partner-ready technical artifacts: solution blueprints, reference architectures, evaluation and monitoring guidance, incident/response playbooks, and deployment checklists.</li>
<li>Package open-source and proprietary capabilities into adoption-ready solutions (e.g., reference implementations, configuration patterns, validated workflows).</li>
<li>Maintain a consistent delivery model across engagements: intake, scoping, governance alignment, execution cadence, and retrospectives that improve the offering over time.</li>
<li>Translate evolving threats into actionable guidance and updates for solution packages (e.g., scams/fraud patterns, cyber-enabled threats, ecosystem abuse trends).</li>
<li>Develop lightweight enablement components as needed: targeted technical modules, hands-on labs, and readiness assessments that accelerate adoption of the solutions.</li>
<li>Define and instrument impact measurement: adoption milestones, readiness indicators, reliability and safety posture improvements, and partner satisfaction with outputs.</li>
<li>Partner closely across engineering, safety, geopolitical analysis, and quantitative teams to ensure solutions are technically credible, threat-informed, and measurable.</li>
<li>Communicate crisply and in a decision-ready way to internal and external stakeholders: progress, trade-offs, risks, and recommendations.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 6+ years in product, technical program leadership, solutions, or platform operations, especially in safety, security, risk, integrity, or enterprise/public-sector contexts.</li>
<li>Have built 0–1 solution offerings (product plus services or productized services): taking ambiguous needs, shipping something concrete, then scaling it into a repeatable model.</li>
<li>Have a builder’s mindset: comfortable incubating early-stage ideas, testing them with partners, and evolving them into durable, repeatable safety and security solutions.</li>
<li>Can go deep with engineers and still produce partner-ready artifacts that are clear</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$288K – $425K</Salaryrange>
      <Skills>product leadership, technical program leadership, solutions, platform operations, safety, security, risk, integrity, enterprise/public-sector contexts, product development, solution development, technical writing, communication, project management, team leadership, collaboration, problem-solving, analytical skills, data analysis, data visualization, machine learning, artificial intelligence, cybersecurity, threat intelligence, incident response, compliance, regulatory affairs, cloud computing, containerization, DevOps, agile development, scrum, kanban, continuous integration, continuous deployment, continuous testing, test automation, security testing, penetration testing, vulnerability assessment, compliance testing, regulatory testing, data protection, information security, cybersecurity frameworks, risk management, compliance management, regulatory compliance, data governance, information governance, data quality, data integrity, data validation, data verification, data certification, data assurance, data security, data encryption, data masking, data tokenization, data anonymization, data pseudonymization, data aggregation, data fusion, data integration, data warehousing, data mart, data lake, data catalog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and applying artificial intelligence in a way that benefits humanity. It was founded in 2015 and has since grown to become one of the leading AI research and development companies in the world.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c664cc09-d996-450c-8683-ad591ac27c11</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>331a8e82-0ec</externalid>
      <Title>Applied Data Science &amp; Machine Learning Leader - Global Forecasting</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Data Science</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Strategic Finance team at OpenAI plays a critical role in shaping the company’s long-term trajectory. We partner closely with Product, Engineering, and Go-to-Market teams to inform high-stakes decisions through rigorous data science and economic modeling.</p>
<p>As part of our expanding Data Science function, we’re building a best-in-class forecasting capability to drive real-time, data-driven decision-making across user growth, revenue, compute infrastructure, and more.</p>
<p>We are developing a scalable forecasting platform to help us understand and anticipate business dynamics in an increasingly complex, usage-based world. Our models are foundational to planning, pricing, operational efficiency, and growth strategy — supporting key investment decisions and unlocking OpenAI’s full potential.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a hands-on technical leader to build and lead a small but mighty team of applied data scientists and ML engineers to develop forecasting capabilities and platforms from the ground up.</p>
<p>Your team will be responsible for building and scaling robust, interpretable, and production-ready forecasting systems. These models will power critical business decisions by predicting core metrics such as DAU/WAU, revenue, compute consumption, and profitability.</p>
<p>You will work closely with Strategic Finance leaders to integrate these scalable forecasting capabilities into their operational rhythms and collaborate with the Growth team to forecast key KPIs.</p>
<p>This is a highly cross-functional role that requires technical excellence, strong product intuition, and business acumen. You’ll collaborate with various partners to operationalize forecasting insights, influence company-wide strategy, and build foundational forecasting capabilities at OpenAI.</p>
<p>This role is based in San Francisco, CA. We follow a hybrid work model of three days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and manage a world-class team of applied data scientists and ML engineers to develop forecasting platforms at scale.</li>
<li>Design and own the roadmap for the forecasting platform in partnership with cross-functional stakeholders.</li>
<li>Collaborate closely with Strategic Finance teams to ensure forecasts are well integrated into planning processes, and executive decision-making.</li>
<li>Work closely with cross-functional partners to help them adopt scientific, automated forecasting solutions.</li>
<li>Own the end-to-end modeling lifecycle, including scoping, feature engineering, model development and prototyping, experimentation, deployment, monitoring, and explainability.</li>
<li>Research and evaluate emerging tools and techniques in the forecasting space.</li>
<li>Translate technical outputs into business-aligned recommendations and decision frameworks.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>Extensive experience building production-ready models for time series applications.</li>
<li>Proven track record of building adjustable and explainable forecast models for multiple planning cycles.</li>
<li>10+ years of applied data science &amp; engineering experience, with deep hands-on expertise in forecasting and predictive modeling.</li>
<li>Demonstrated experience with model monitoring, debugging, and long-term maintenance in production environments.</li>
<li>Excellent communication and storytelling skills — able to simplify complexity and influence executive stakeholders.</li>
<li>Team leadership experience with a track record of building high-performing, engaged teams.</li>
<li>An advanced degree (MS or PhD) in a quantitative field (e.g., Statistics, Computer Science, Economics, Operations Research).</li>
<li>A self-directed, intellectually curious mindset and comfort leading ambiguous projects from 0→1.</li>
<li>The ability to thrive in a dynamic environment — flexible, resourceful, and willing to do what it takes.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$347K – $490K</Salaryrange>
      <Skills>Applied Data Science, Machine Learning, Forecasting, Predictive Modeling, Time Series Analysis, Data Science, Engineering, Statistics, Computer Science, Economics, Operations Research, Leadership, Communication, Storytelling, Team Management, Project Management, Data Visualization, Machine Learning Engineering, Cloud Computing, Big Data, Data Warehousing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing artificial intelligence (AI) systems. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/5d856b70-b7b9-4f03-8a29-748b3d9dc61f</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f23f6e54-b82</externalid>
      <Title>Engineering Manager, Identity Infrastructure</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Engineering Manager, Identity Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>Identity is the foundation of trust in an AI-powered world. As people build a rich personal and organisational context in ChatGPT, they need systems that protect it, remember it, and let them share it securely across apps and agents. Without strong identity, context stays locked away or exposed. With it, AI becomes safe, personal, and truly useful, empowering users while keeping control firmly in their hands. Our team builds the trusted OpenAI account layer that connects people, enterprises, developers, and agents to OpenAI apps and the broader AI ecosystem. Our systems are the front door to OpenAI—held to an exceptionally high bar for availability, security, and latency—and are depended on by many internal teams.</p>
<p><strong>About the Role</strong></p>
<p>As an Engineering Manager, Identity Infrastructure, you will lead a team owning foundational identity services that must be highly reliable, secure, and fast under extreme load. You’ll set direction, grow and support the team, and partner closely across engineering and security to strengthen the platform that enables trusted access to OpenAI products.</p>
<p>We’re looking for people who combine strong infrastructure depth with people leadership—someone who can guide technical strategy, run reliable operations, and build a high-performing team delivering mission-critical systems.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>What You’ll Do:</strong></p>
<ul>
<li>Build, lead, and develop a high-performing engineering team. Coach and mentor engineers and senior technical leads to increase their impact and support their growth.</li>
<li>Define and own the technical strategy, roadmap, and execution for OpenAI’s core identity systems.</li>
<li>Guide the architecture of distributed systems operating at massive scale, with high requirements for reliability, privacy, and performance.</li>
<li>Partner closely with Product and Infrastructure teams to support new product needs and uphold high system standards.</li>
<li>Ensure the team operates effectively, maintains strong engineering rigor, and consistently delivers against ambitious goals.</li>
<li>Establish processes and a culture that enable fast learning, high trust, and responsible innovation.</li>
<li>Lead the team responsible for the authentication stack and token exchange infrastructure, ensuring high availability, strong security posture, and low-latency performance.</li>
<li>Own and improve request validation at the edge for logged-in users, delivering extremely high reliability and fast validation under high throughput.</li>
<li>Oversee systems that enable secure access to user data, improving scalability, reliability, and access patterns for internal consumers across the company.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 3+ years of experience managing high-performance engineering teams in fast-paced, rapidly shifting environments.</li>
<li>Have 10+ years of experience building, scaling, and operating distributed systems at large scale.</li>
<li>Care deeply about people leadership, coaching, and helping engineers achieve peak performance.</li>
<li>Communicate clearly, think holistically, and make decisions grounded in both user needs and long-term sustainability.</li>
<li>Have experience managing an infrastructure team and are effective at hiring, coaching, and developing engineers while maintaining strong operational excellence.</li>
<li>Have deep experience building and operating high-scale backend systems, with a track record of improving reliability, latency, and security in production.</li>
<li>Bring broad infrastructure expertise across the stack—from hardware and cluster management to services and storage layers.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of artificial general intelligence and work to align its development with human values. Our mission is to create a world where AI is a powerful tool for humanity, and we believe that this requires a multidisciplinary approach to research and development. We are committed to transparency and open communication, and we believe that sharing our knowledge and insights with the world is essential to advancing the field of AI. We are a team of researchers, engineers, and designers who are passionate about creating a future where AI is a force for good. We are based in San Francisco, California, and we are proud to be a part of the vibrant tech community in the Bay Area. We are committed to diversity, equity, and inclusion, and we believe that a diverse and inclusive team is essential to creating a better future for all. We are always looking for talented individuals who share our vision and values, and we encourage you to join our team and help us shape the future of AI.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $385K</Salaryrange>
      <Skills>Engineering Management, Identity Infrastructure, Distributed Systems, Reliability, Privacy, Performance, Technical Strategy, Roadmap, Execution, Architecture, Cluster Management, Services, Storage Layers, High-Performance Engineering Teams, People Leadership, Coaching, Hiring, Developing Engineers, Operational Excellence, Backend Systems, Reliability, Latency, Security, Cloud Computing, Containerization, DevOps, Agile Methodologies, Scrum, Kanban, Continuous Integration, Continuous Deployment, Continuous Testing, Infrastructure as Code, Automation, Scripting, Programming Languages, Data Structures, Algorithms, Computer Networks, Database Systems, Data Warehousing, Business Intelligence, Data Science, Machine Learning, Artificial Intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/3ee22a2b-17f6-4848-93e4-b43e61169002</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ad85f323-9d3</externalid>
      <Title>Software Engineer III – Oracle Techno-Functional Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer III to join our team. As a techno-functional expert, you will design, develop, customize, and test Oracle R12.2 solutions aligned with business requirements. Your responsibilities will include serving as a techno-functional expert across Oracle Finance and supply chain modules, analyzing and mapping business processes, functional requirements, and system capabilities, and delivering enhancements, extensions, and integrations per Oracle best practices.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Design, develop, customize, and test Oracle R12.2 solutions aligned with business requirements.</li>
<li>Serve as techno-functional expert across Oracle Finance and supply chain modules.</li>
<li>Analyze and map business processes, functional requirements, and system capabilities.</li>
<li>Deliver enhancements, extensions, and integrations per Oracle best practices.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>10-12 years&#39; experience as an Oracle Techno-Functional Consultant/Engineer.</li>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Information Technology, Engineering, or related field.</li>
<li>Strong hands-on experience with Oracle R12.2 EBS.</li>
<li>Strong understanding of finance &amp; Supply chain processes (GL, AR, AP etc.,).</li>
<li>Proficiency in Oracle technical tools: SQL, PL/SQL, Oracle Forms/Reports, OAF/ADF, BI Publisher, Workflow Builder.</li>
<li>Experience in building customizations, conversions, extensions, and integrations.</li>
<li>Excellent analytical, problem-solving, and debugging skills.</li>
<li>Strong communication skills, able to work cross-functionally.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Oracle R12.2 EBS, Oracle Finance and supply chain modules, SQL, PL/SQL, Oracle Forms/Reports, OAF/ADF, BI Publisher, Workflow Builder, Experience with modern cloud platforms (AWS, Azure), Functional exposure to Zuora (billing/subscription management), Technical experience with Snowflake (data warehousing) and Power BI (reporting/analytics), Working knowledge on Fusion Financials, Experience in building and deploying solutions with GenAI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-3-Oracle-Techno-Functional-Engineer/212344</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-02-06</Postedate>
    </job>
    <job>
      <externalid>03b5d4bd-eb5</externalid>
      <Title>Data Engineer II - Mobile Growth</Title>
      <Description><![CDATA[<p>As a Data Engineer II - Mobile Growth, you will support some of our largest game titles by helping us understand player engagement and measure the effectiveness of our marketing efforts. We are looking for an experienced Data Engineer with broad technical skills and the ability to work with large amounts of data.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>You will work with analysts, understand requirements, and develop technical specifications for ETLs.</li>
<li>You will implement efficient, scalable, and reliable data pipelines to move and transform data.</li>
<li>You will promote strategies to improve our data modelling, quality, and architecture.</li>
<li>You will work with big data solutions, ETL pipelines, and dashboard tools.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>5+ years of relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.</li>
<li>Proficiency in writing SQL queries and knowledge of cloud-based databases such as Snowflake, Redshift, BigQuery, or other big data solutions.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,600 - $167,300 CAD</Salaryrange>
      <Skills>SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, data pipeline tools, version control systems, containerization, orchestration technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II-Mobile-Growth/211355</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-01-05</Postedate>
    </job>
  </jobs>
</source>