<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>a5ab4ff8-a1f</externalid>
      <Title>Vice President Global Accounting / CAO (all genders)</Title>
      <Description><![CDATA[<p>Job Description</p>
<p>About tonies: tonies is the globally leading interactive audio platform for children, with more than 10 million Tonieboxes and 125 million Tonies sold. The intuitive and award-winning audio system has changed the way young children play and learn independently with its child-safe, wireless, and screen-free approach. Tonieboxes have been activated in over 100 countries, and the content portfolio includes around 1,400 Tonies in several languages.</p>
<p>You as part of tonies: Are you ready to lead our global financial heartbeat? In this pivotal role, reporting directly to our CFO, you will take full ownership of our global accounting landscape. You will transform our financial operations into a best-in-class global function, overseeing the transition to a unified global structure and spearheading our Global SAP implementation. Based in Düsseldorf with a flexible hybrid setup, you will lead your team through one of the most exciting scaling phases in our history.</p>
<p>Your tasks and responsibilities will include:</p>
<ul>
<li>Global Accounting Ownership: Lead and unify the global accounting function, ensuring seamless reporting and compliance across all international markets.</li>
<li>Digital Excellence: Act as the key business owner for our Global SAP S/4HANA Introduction, ensuring scalability through process automation and standardization, in collaboration with the CIO.</li>
<li>Team Leadership: Manage and mentor a team of 20+ people (5 direct reports), fostering a culture of excellence and international collaboration.</li>
<li>Closing &amp; Compliance: Manage the periodic and consolidated closing processes (IFRS/HGB) and maintain gold-standard relationships with external auditors and partners.</li>
<li>Best Practices: Know what great looks like in a capital market environment. Lead by example and drive subject matter expertise through your own skill and process knowledge. Bring a passion for putting accounting standards into practice and an affinity for financial data transformation in a diverse environment, shaping tonies&#39; global best practices and standards.</li>
<li>Corporate Finance Support: Partner with the CFO and CFO staff on strategic initiatives, including refinancing and potential M&amp;A activities.</li>
</ul>
<p>What we are looking for:</p>
<ul>
<li>The Seasoned Expert: 15+ years of finance experience, with a strong emphasis on Global Accounting and Audit. While Big 4 experience is a plus, proven corporate experience in a fast-paced environment is essential. Significant experience with Capital Markets is highly preferred.</li>
<li>The Global Mindset: Extensive experience in managing international accounting structures and a deep understanding of global financial compliance.</li>
<li>The Transformation Driver: You have successfully led or played a major role in a Global SAP implementation and know how to build scalable processes.</li>
<li>The People Leader: You are passionate about mentoring professionals and have a track record of leading teams of 20+ people through change.</li>
<li>The Communicator: Excellent command of English is a must; German language skills are required (professional proficiency, does not need to be native).</li>
<li>The Local Presence: You are based in or willing to relocate to the Düsseldorf area to be present in the office 3–4 days a week.</li>
</ul>
<p>Why tonies?</p>
<ul>
<li>Global Teamwork: We collaborate across departmental and country borders on our vision to bring the Toniebox into every child&#39;s room in the world.</li>
<li>Come as you are: This applies not only to the dress code but also to everything else. Because only where you truly feel comfortable can you give your best.</li>
<li>Mobility: Choose the option that suits you best - a Deutschlandticket (public transport ticket) for unlimited mobility, a monthly contribution for an office parking space, a leasing bicycle, or a remote work subsidy.</li>
<li>Enhanced Security: Benefit from subsidies for company pension plans, occupational pension schemes, and occupational disability insurance.</li>
<li>Rest &amp; Time Off: Enjoy 30 days of paid annual leave as well as three additional days off such as Rosenmontag, Christmas Eve, and New Year&#39;s Eve. After one year of employment, you can also use up to 10 &#39;toniecation days&#39; (unpaid leave days).</li>
<li>Continuous Learning: Benefit from our internal and external training opportunities as well as an individual learning budget to continuously expand your knowledge.</li>
<li>Language Learning &amp; Relaxation: Improve your communication skills with the language learning app Babbel and find relaxation through our access to the meditation app Calm.</li>
<li>Discounts: Benefit from attractive discounts on our entire range of tonies products.</li>
</ul>
<p>Good to know: As part of our principles, we are committed to supporting inclusion and diversity at tonies. We actively celebrate our colleagues&#39; different abilities, ethnicities, faiths, and genders. Everyone is welcome and supported in their development at all stages in their journey with us.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Global Accounting, Audit, Capital Markets, SAP S/4HANA, Process Automation, Standardization, Team Leadership, Closing &amp; Compliance, Financial Data Transformation</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>tonies GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/tonies.jobs.personio.com.png</Employerlogo>
      <Employerdescription>tonies is the globally leading interactive audio platform for children, with over 10 million Tonieboxes and 125 million Tonies sold.</Employerdescription>
      <Employerwebsite>https://tonies.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://tonies.jobs.personio.com/job/2570426</Applyto>
      <Location>Düsseldorf · Hybrid (Düsseldorf + Home Office)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eb830261-e76</externalid>
      <Title>Senior Software Engineer, Connectivity</Title>
      <Description><![CDATA[<p>We&#39;re looking for a senior engineer with deep experience building robust platforms. As a member of the Connectivity team, you&#39;ll design and own foundational platform systems that support scalable data generation, evaluation, and bespoke customer delivery across Scale&#39;s ecosystem.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing extensible, production-grade services that can support frontier AI workflows, including multi-modal inputs, long-running processes, and agentic orchestration.</li>
<li>Building and operating distributed systems at scale, with strong guarantees around correctness, reliability, observability, and cost efficiency.</li>
<li>Integrating with public LLM APIs and AI services, designing abstractions that are resilient to model churn, latency variability, and evolving usage patterns.</li>
<li>Designing and maintaining data transformation and processing systems, supporting complex schema evolution, customization, and high-throughput workloads.</li>
<li>Partnering closely with infrastructure, product, and customer-facing teams to define requirements, shape technical direction, and deliver seamless integration experiences for customers.</li>
<li>Leading multi-quarter technical initiatives, including authoring and driving a 6+ month technical roadmap for major platform investments.</li>
<li>Applying strong engineering judgment in ambiguous problem spaces, balancing speed with long-term maintainability and operational excellence.</li>
<li>Raising the quality bar through thoughtful system design reviews, rigorous code reviews, and mentorship grounded in real-world production experience.</li>
</ul>
<p>Ideal Experience:</p>
<ul>
<li>7+ years of professional software engineering experience, with a strong background in building and operating large-scale, production-grade platforms.</li>
<li>Deep expertise in distributed systems and cloud-native architectures, including Kubernetes, microservices, event-driven systems, caching, and production databases.</li>
<li>Proven ability to lead multi-quarter technical initiatives, work effectively across cross-functional teams, and apply strong architectural judgment in ambiguous environments.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>distributed systems, cloud-native architectures, Kubernetes, microservices, event-driven systems, caching, production databases, LLM APIs, AI services, data transformation, processing systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4654275005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>879783fa-e08</externalid>
      <Title>Sr. Product Manager, Data Engineering</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. Data Engineering is foundational and among the largest-scale workloads on the Databricks Data Intelligence Platform. We are reinventing Data Engineering with Lakeflow - a unified product and experience for simple data ingestion, declarative data transformation, and real-time streaming.</p>
<p>In this role, you will lead product management for a core Lakeflow product area. You will own and drive all aspects of product management, including vision, strategy, roadmap, execution, and go-to-market. In addition, you will partner closely with various Databricks product teams to enable Data Engineering for the overall Databricks product portfolio, including data science, data warehousing, business intelligence, and machine learning products.</p>
<p>The impact you will have:</p>
<ul>
<li>Leading product management for one of the fastest-growing products and businesses at Databricks</li>
<li>Making company-wide impact by driving Data Engineering across the Databricks product portfolio</li>
<li>Developing and deepening your understanding of and expertise in Data Engineering</li>
<li>Defining, shaping, and driving the future of data processing, data applications, and data pipelines</li>
<li>Owning the full life cycle of product development, from ideation to requirements, development, pricing, launch, and go-to-market</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$115,400-$204,200 USD</Salaryrange>
      <Skills>product management, data engineering, Lakeflow, data ingestion, declarative data transformation, real-time streaming, product vision, product strategy, roadmap development, execution, go-to-market</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/6322654002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>df625390-342</externalid>
      <Title>AI Analytics Engineer (Marketing Analytics)</Title>
      <Description><![CDATA[<p>We&#39;re seeking an AI Analytics Engineer to join our Data Science &amp; Analytics team. In this high-impact, early-career role, you will be responsible for building the canonical data infrastructure, owning critical dashboards, and enabling Marketing stakeholders to make faster, more confident, data-driven decisions.</p>
<p>You will design and maintain trustworthy data models for core marketing metrics, manage the full lifecycle from prototyping through production, and develop and govern dbt data pipelines. You will also build and optimize dashboards that deliver real-time, self-serve insights across high-priority marketing areas, drive data independence for Marketing stakeholders, and collaborate with the Marketing team and data partners to establish the AI Business Context layer for marketing use cases.</p>
<p>You will serve as the primary data partner for marketing managers, demand generation teams, and leadership, translating complex data insights into clear business recommendations via dashboards, memos, and presentations. You will achieve a comprehensive mastery of Airtable&#39;s marketing data models, existing pipelines, and BI tools within the first 6 months, becoming the definitive internal expert.</p>
<p>This is a genuinely AI-native role, requiring active, demonstrated daily use of AI coding tools such as Cursor, Claude, ChatGPT, and Gemini. You must provide specific, concrete examples of how these tools are integral to your work, moving beyond simple familiarity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Expert-level SQL, Proficiency with dbt or equivalent data transformation tools, Experience with BI and visualization platforms (Looker, Omni, Tableau, Hex, or similar), Active, demonstrated daily use of AI coding tools (Cursor, Claude, ChatGPT, Gemini), Mandatory use of GitHub for version control in a standard development workflow, Python for data work (pandas, ETL scripting, or analysis), Prior exposure to marketing data concepts: attribution, funnel metrics, lead scoring, or campaign performance, Familiarity with CRM (Salesforce) or marketing automation platforms (Marketo), Experience with Databricks or cloud data warehouses, A public portfolio showcasing data or AI-assisted engineering work (GitHub, personal projects, Kaggle)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8434307002</Applyto>
      <Location>San Francisco, CA; Austin, TX; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3beddc8f-183</externalid>
      <Title>Staff Data Systems Analyst</Title>
      <Description><![CDATA[<p>At ZoomInfo, we&#39;re looking for a Senior Data Systems Analyst to join our team. As a key member of our data operations team, you&#39;ll be responsible for building deep expertise in our company data pipeline, which ingests, processes, and profiles millions of company records. Your primary focus will be on mastering our pipeline architecture, contributing to our infrastructure transition, and leading strategic data improvement initiatives.</p>
<p>In your first 6-12 months, you&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth. As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Mastering our company data pipeline architecture, including how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Reading and analyzing production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Developing frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Creating clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
<li>Contributing to pipeline evolution and infrastructure improvements by participating in design conversations with Engineering and Product, validating pipeline improvements through rigorous testing, and translating data quality investigations and emerging requirements into system-level improvement opportunities</li>
<li>Solving complex, ambiguous data challenges by leading or contributing to data improvement initiatives that require both systems thinking and creative problem-solving</li>
<li>Building partnerships and institutional knowledge by developing strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts, conducting impact analyses and validation studies, and documenting your learning, approaches, and insights</li>
</ul>
<p>We&#39;re looking for a highly skilled individual with a strong background in data analytics, data engineering, or related technical roles. You should have experience working with data pipelines, ETL systems, or data processing infrastructure, and be able to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility.</p>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar)</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with ability to investigate complex issues systematically</li>
<li>Excellent communication skills, able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with a strong ownership mentality: you drive your work forward and know when to seek input</li>
</ul>
<p>Preferred qualifications include experience with company data, business data, web data acquisition, or data quality initiatives, as well as experience with data profiling, entity resolution, record linkage, or data matching systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data analytics, data engineering, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system logic, technical feasibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides software solutions for sales and marketing professionals.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408622002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc923f59-e03</externalid>
      <Title>Senior Data Engineering Analyst</Title>
      <Description><![CDATA[<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute; you&#39;ll make things happen, fast.</p>
<p>We&#39;re seeking a Senior Data Systems Analyst to become the expert on our company data pipeline: the system that ingests, processes, and profiles millions of company records that power our customers&#39; go-to-market strategies. In this role, you&#39;ll build deep expertise in how our company data flows from acquisition through profiling and output. You&#39;ll read code to understand data transformations and system dependencies, bring informed opinions to design conversations with Engineering and Product, and help shape the evolution of our next-generation data infrastructure.</p>
<p>As you build mastery of our systems, you&#39;ll increasingly lead strategic data improvement initiatives that require both systems thinking and creative problem-solving. This isn&#39;t about building dashboards or SQL reports. This is about understanding data systems at an architectural level, solving ambiguous data challenges, and ensuring our pipeline infrastructure continuously evolves to meet customer needs and maintain competitive advantage.</p>
<p>You&#39;ll work closely with other data analysts during an active infrastructure transition period, and as systems stabilize and your expertise deepens, you&#39;ll progressively own more of the pipeline architecture and strategic initiatives. This is a role with significant growth runway for someone who wants to become the go-to technical expert on company data systems.</p>
<p><strong>Who You Are</strong></p>
<p>Systems Thinker with Technical Depth: You understand how data systems work, not just what they produce. You&#39;ve worked with data pipelines, ETL systems, or data processing infrastructure; maybe you&#39;ve improved one, debugged one, or owned components of one. You can read code (Python, Java, SQL, or similar) well enough to understand data transformations and trace how data flows through systems.</p>
<p>Opinionated Technical Contributor: You don&#39;t just execute; you have informed opinions on how things should work. You can assess technical tradeoffs, evaluate whether a proposed solution is feasible, and contribute meaningfully to design conversations with engineers.</p>
<p>Growth-Oriented Problem Solver: You&#39;re excited to build deep expertise in a complex domain and grow into leading strategic initiatives. You&#39;ve tackled ambiguous problems that required figuring things out as you went, and you want to expand your project leadership capabilities in a systems-focused environment.</p>
<p>Analytical and Hands-On: You&#39;re equally comfortable writing code to analyze data patterns and manually investigating edge cases to understand what&#39;s really happening. You dig into details when needed and know when to zoom out to see the bigger picture.</p>
<p>Clear Communicator: You can explain technical complexity to non-technical audiences. You&#39;ve worked effectively with Engineering, Product, or cross-functional teams, translating between technical constraints and business needs.</p>
<p>Comfortable with Ambiguity: You thrive in evolving environments where priorities shift and problems aren&#39;t always well-defined. You maintain momentum and quality even when the path forward isn&#39;t perfectly clear.</p>
<p><strong>What You&#39;ll Do</strong></p>
<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>
<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p><strong>Build Deep Pipeline &amp; Systems Expertise</strong></p>
<ul>
<li>Master our company data pipeline architecture: how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
</ul>
<p><strong>Contribute to Pipeline Evolution &amp; Infrastructure Improvements</strong></p>
<ul>
<li>Participate actively in design conversations with Engineering and Product about our next-generation pipeline, bringing data quality insights, technical feasibility assessments, and informed opinions on architectural decisions</li>
<li>Help validate pipeline improvements through rigorous testing, impact analysis, and hands-on verification of data quality</li>
<li>Translate data quality investigations and emerging requirements into system-level improvement opportunities</li>
<li>Collaborate with team members to determine when problems should be solved at the pipeline/profiler level versus through downstream approaches</li>
</ul>
<p><strong>Solve Complex, Ambiguous Data Challenges</strong></p>
<ul>
<li>Lead or contribute to data improvement initiatives that require both systems thinking and creative problem-solving, such as improving location verification across international markets, integrating new data sources, or solving novel data extraction challenges</li>
<li>Tackle problems where the solution isn&#39;t obvious through a blend of code analysis, manual investigation, cross-functional coordination, and iterative problem-solving</li>
<li>Build and apply repeatable approaches to testing, validation, and root cause analysis</li>
</ul>
<p><strong>Build Partnerships &amp; Institutional Knowledge</strong></p>
<ul>
<li>Develop strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts</li>
<li>Conduct impact analyses and validation studies to ensure proposed changes deliver intended outcomes</li>
<li>Document your learning, approaches, and insights so knowledge is shared and institutional memory builds across the team</li>
<li>Serve as a technical resource as you develop expertise, helping bridge immediate data quality needs with long-term pipeline capabilities</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure; you understand how data moves through systems and what can go wrong</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with ability to investigate complex issues systematically</li>
<li>Excellent communication skills, able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with a strong ownership mentality: you drive your work forward and know when to seek input</li>
</ul>
<p><strong>Strongly Preferred</strong></p>
<ul>
<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>
<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data analysis, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system dependencies, data quality, data profiling, entity resolution, record linkage, data matching</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides software solutions for sales and marketing professionals.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408637002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c4cc3bc0-a5d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>Job Title: Senior Analytics Engineer</strong></p>
<p>You&#39;ll be part of a team that empowers you to do the best work of your life. As a Senior Analytics Engineer at ZoomInfo, you&#39;ll be responsible for building deep expertise in our company data pipeline architecture.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Master our company data pipeline architecture: how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
</ul>
<p><strong>What You&#39;ll Do:</strong></p>
<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>
<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure; you understand how data moves through systems and what can go wrong</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with the ability to investigate complex issues systematically</li>
<li>Excellent communication skills; able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with a strong ownership mentality; you drive your work forward and know when to seek input</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>
<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>
<li>Background contributing to</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data pipeline architecture, data transformation, ETL systems, data processing infrastructure, Python, SQL, data analysis, data manipulation, ambiguous data problems, data quality initiatives</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides sales intelligence and go-to-market solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408633002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e500a909-0d2</externalid>
      <Title>Senior Data Analyst</Title>
      <Description><![CDATA[<p>As a Senior Data Analyst, you will play a crucial role in our data-driven decision-making process. You will be responsible for turning raw data into actionable insights that will shape our product strategy and drive business growth.</p>
<p>This role requires a deep understanding of product analytics, a strong technical skillset, and a forward-thinking mindset to leverage AI in your analysis.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build insightful and user-friendly dashboards and reports in Tableau to track key product metrics and performance indicators.</li>
<li>Utilize Snowflake and dbt to build and maintain robust and scalable data models and pipelines for our analytics needs.</li>
<li>Conduct in-depth product analysis to identify trends, patterns, and opportunities for product improvement and growth.</li>
<li>Write complex SQL queries and use Python to perform advanced data analysis.</li>
<li>Collaborate with product managers, engineers, and other stakeholders to understand their data needs and provide them with the insights they need to make informed decisions.</li>
<li>Proactively identify and explore new ways to leverage AI and machine learning to enhance our analytical capabilities and unlock new insights from our data.</li>
<li>Communicate your findings and recommendations effectively to both technical and non-technical audiences.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Proven experience as a Data Analyst or in a similar role, with a focus on product analytics.</li>
<li>3-5 years of experience.</li>
<li>Expert-level proficiency in SQL.</li>
<li>Strong experience with data visualization tools, particularly Tableau.</li>
<li>Hands-on experience with cloud data warehouses like Snowflake and data transformation tools like dbt.</li>
<li>Advanced analytics, data science, AI/ML experience and techniques are a plus.</li>
<li>A strong analytical mindset with the ability to think critically and solve complex problems.</li>
<li>A proactive and curious mindset with a strong desire to learn and experiment with new technologies, especially in the realm of AI.</li>
<li>Excellent communication and collaboration skills.</li>
</ul>
<p>Added Advantage:</p>
<ul>
<li>Experience with other BI tools like Looker.</li>
<li>Experience with statistical analysis and machine learning.</li>
<li>Familiarity with product analytics platforms like Amplitude or Mixpanel.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Tableau, Snowflake, dbt, Python, Data Analysis, Data Visualization, Cloud Data Warehouses, Data Transformation, Advanced Analytics, Data Science, AI/ML, Statistical Analysis, Machine Learning, Product Analytics Platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7731263</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be4a55b1-c38</externalid>
      <Title>Partner Enablement Scaled Programs Lead</Title>
      <Description><![CDATA[<p>As the Scaled Partner Enablement Lead, you will be part of a visionary team that is architecting our partner enablement strategy. You will design, build, and execute high-impact learning programs spanning technical certifications, sales readiness, specialization, and onboarding programs to ensure our partners sell and deliver Databricks solutions with total confidence.</p>
<p>This role demands a master of strategic planning and operational excellence. You will manage complex global initiatives, collaborate with cross-functional stakeholders, and translate business needs into world-class learning experiences. You’re joining a high-performance team that recently tripled our trained partner base in just 12 months, using innovative learning models to transform the Databricks ecosystem at scale.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Strategic Alignment: Partner with leadership, partner-facing teams, and partners themselves to identify critical needs and translate business requirements into high-impact enablement programs.</li>
<li>Performance Accountability: Drive and own the results for your partner portfolio, maintaining full accountability for key metrics, including training completion and certification targets.</li>
<li>Build Learning Programs: Collaborate with internal Subject Matter Experts (SMEs) to define best-in-class enablement programs and establish rigorous success measures.</li>
<li>Roadmap Management: Lead the roadmap for training initiatives, including large-scale learning events, global workshop schedules, and the Go-to-Market strategy for partner enablement programs.</li>
<li>Operational Excellence: Oversee the end-to-end program lifecycle from initial intake and resourcing to scheduling, logistics, and task management, ensuring seamless execution.</li>
<li>Data-Driven Insights: Monitor and analyze KPIs such as certification rates, ROI, and program impact.</li>
<li>Quality Assurance: Evaluate, audit, and coach cross-functional training resources to ensure the delivery of &quot;best-in-class&quot; learning experiences.</li>
</ul>
<p><strong>Minimum Qualifications:</strong></p>
<ul>
<li>Education: Bachelor’s degree in a technical discipline or equivalent practical experience.</li>
<li>Experience: 6+ years of experience developing and running large-scale training and certification programs within a global tech ecosystem.</li>
<li>Ecosystem Knowledge: Strong understanding of the System Integrator business model and how technical partners successfully go to market.</li>
<li>Technical Knowledge: Deep understanding of Data &amp; AI technologies, including topics such as Data Warehouse, Data Transformation, Machine Learning, and Generative AI; ideally certified in at least one of these technologies.</li>
<li>Program Management: Proven track record of managing technical competency models and learning journeys for geographically dispersed teams.</li>
<li>Communication: Exceptional verbal and written communication skills with the ability to influence cross-functional stakeholders.</li>
</ul>
<p><strong>Other Desired Qualifications:</strong></p>
<ul>
<li>Experience specifically within the Data + AI or Cloud Infrastructure space.</li>
<li>Familiarity with scaling programs within cloud hyperscalers such as Google, Microsoft, and AWS, or similar cloud partner ecosystems.</li>
<li>Proficiency in data visualization tools to report on program ROI.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>strategic planning, operational excellence, program management, data analysis, communication, technical certifications, sales readiness, specialization, onboarding programs, Data AI technologies, Data Warehouse, Data Transformation, Machine Learning, Generative AI, Cloud Infrastructure, Google, Microsoft, AWS, data visualization tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7839714002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e4ad40eb-311</externalid>
      <Title>Partner Enablement Technical Lead</Title>
      <Description><![CDATA[<p>Databricks is a partner-first company. Our exponential growth is fueled by the success of our partner ecosystem. As the Partner Enablement Technical Lead, you will be a key architect of our global partner enablement strategy. You will design, build, and execute high-impact learning programs, ranging from technical readiness to advanced certifications, ensuring our partners can deliver Databricks projects with total confidence.</p>
<p>This role demands a master of strategic planning and operational excellence. You will manage complex global initiatives, collaborate with cross-functional stakeholders, and translate business needs into world-class learning experiences. You are joining a high-performance team that recently tripled our trained partner base in just 12 months, using innovative learning models to transform the Databricks ecosystem at scale.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design Technical Enablement Programs: Architect comprehensive enablement programs such as Advanced Technical Academies, Delivery Specializations, and technical onboarding to drive partner technical proficiency. In collaboration with Subject Matter Experts, you will define end-to-end learning paths encompassing curriculum, assessments, and success metrics.</li>
<li>Strategic Alignment: Partner with leadership and partner-facing teams to identify critical skills gaps and translate business requirements into high-impact enablement initiatives.</li>
<li>Technical Roadmap Leadership: Own the roadmap for technical programs, including large-scale learning events, global workshop schedules, and GTM strategies for enablement.</li>
<li>Performance Accountability: Drive results for your partner portfolio, maintaining full ownership of key metrics such as training completion and certification targets.</li>
<li>Data-Driven Insights: Monitor and analyze KPIs (certification rates, ROI, and program impact) to continuously optimize enablement effectiveness.</li>
<li>Quality Assurance &amp; Coaching: Oversee the technical integrity of programs launched across the team. Evaluate and coach cross-functional training resources to ensure &#39;best-in-class&#39; delivery.</li>
</ul>
<p><strong>Minimum Qualifications:</strong></p>
<ul>
<li>Education: Bachelor&#39;s degree in a technical discipline or equivalent practical experience.</li>
<li>Experience: 7+ years of experience developing and scaling technical learning programs within a global enterprise tech ecosystem.</li>
<li>Technical Depth: Deep understanding of Data &amp; AI technologies (Data Warehousing, Data Transformation, ML, and Generative AI); ideally holds at least one industry certification in these areas.</li>
<li>Consulting Background: Proven experience as part of a technical delivery team, successfully implementing Data &amp; AI projects for external clients.</li>
<li>Ecosystem Knowledge: Strong understanding of the System Integrator (SI) business model and how technical partners successfully go to market.</li>
<li>Program Management: Proven track record of managing technical competency models and learning journeys for large, geographically dispersed teams.</li>
<li>Communication: Exceptional verbal and written communication skills with the ability to influence senior cross-functional stakeholders.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Strong background in the Generative AI landscape.</li>
<li>Experience scaling technical enablement within &#39;Hyperscaler&#39; environments (AWS, Azure, or GCP) or similar high-growth cloud partner ecosystems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$154,300-$212,200 USD</Salaryrange>
      <Skills>Data &amp; AI technologies, Data Warehousing, Data Transformation, ML, Generative AI, Program Management, Technical Competency Models, Learning Journeys, Cross-Functional Stakeholders, Generative AI landscape, Hyperscaler environments, Cloud partner ecosystems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8435968002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b1fa4435-fc2</externalid>
      <Title>Business Systems Analyst, Data Enrichment</Title>
      <Description><![CDATA[<p>We are seeking a Business Systems Analyst, Data Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>
<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes,including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritize, and close business.</p>
<p>This is a hands-on, technically-oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modeling. You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>
<li>Build and maintain a unified enrichment master: a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>
<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximize coverage, accuracy, and cost efficiency while minimizing redundancy</li>
<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. direct field integration, parent account structures, enrichment audit trails)</li>
<li>Perform hands-on data manipulation and transformation: write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>
<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>
<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>
<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>
<li>Define and track enrichment KPIs, including match rates, data completeness, freshness, accuracy, and downstream GTM impact, using metrics to continuously improve the enrichment ecosystem</li>
<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>
<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology environment</li>
<li>Deep expertise with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>
<li>Strong Salesforce experience (required), including data modeling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>
<li>Hands-on technical skills for data manipulation, including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>
<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities; able to translate business needs into technical requirements and drive execution across cross-functional teams</li>
<li>A dual data + RevOps mindset: equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimization</li>
<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>
<li>Familiarity with data quality and Master Data Management (MDM) frameworks and tools</li>
<li>Experience with routing and scoring tools such as LeanData, and with marketing automation platforms</li>
<li>Background in startup signal detection: identifying high-potential early-stage companies through funding, hiring, technographic, and intent signals</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $190,000-$270,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work. We think AI systems like the one</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000-$270,000 USD</Salaryrange>
      <Skills>data enrichment, data operations, revenue/marketing operations, enrichment tools, enrichment strategy, salesforce, sql, data warehouses, etl/reverse etl pipelines, apis, data transformation tools, product ownership, roadmaps, backlogs, stakeholder priorities, technical requirements, cross-functional teams, data science, data engineering, infrastructure, schema design, pipeline and territory optimization, communication skills, technical and business audiences, stakeholder discovery sessions, present enrichment strategy and impact to leadership, ai-powered enrichment, llms, prompt-based enrichment, emerging companies, startups, hard-to-enrich segments, data quality, mdm frameworks, routing and scoring tools, marketing automation platforms, startup signal detection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5127289008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a98d4ace-d27</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior Data Engineer to join our high-performing data enablement team. As a Senior Data Engineer, you will play a pivotal role within the Data team that powers Yuno and its payment platform, while helping co-design and implement an architecture that scales with the product and the company.</p>
<p>The stack is modern: StarRocks as our primary analytical layer, Flink for processing, DBT for transformation, Airflow for orchestration and various tooling for surfacing insights.</p>
<p>You&#39;ll be working on things that matter and are technically interesting:</p>
<ul>
<li><p>Design and build data pipelines for large volumes of payment data that are performant, reliable, and correct, not just fast.</p>
</li>
<li><p>Own end-to-end data flows: from ingestion and transformation through to the outputs that Finance, Product, and clients depend on.</p>
</li>
<li><p>Drive data quality across your domain with tooling.</p>
</li>
<li><p>Work cross-functionally with Product and Finance, and enable other Engineering teams via a &#39;consulting&#39;-style model.</p>
</li>
<li><p>Contribute to how the team works: code review culture, CI/CD standards, ADRs, and how we handle incidents. We&#39;re building these practices now, and senior engineers shape them.</p>
</li>
<li><p>Help onboard and level up engineers around you; there&#39;s real opportunity to make an impact here.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Proven proactivity, technical acumen and the ability to lead initiatives and deliver projects., Experience in defining and evolving data engineering standards, architectural guidelines and governance, ideally within a regulated environment., Strong Python and SQL skills., Hands-on experience with Spark or Flink in production., DBT for data transformation., Airflow for orchestration., Experience with Apache Hudi., Experience with financial, transactional, or payment data.</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Yuno</Employername>
      <Employerlogo>https://logos.yubhub.co/yuno.com.png</Employerlogo>
      <Employerdescription>Yuno builds the payment infrastructure that allows all companies to participate in the global market, providing access to leading payment capabilities.</Employerdescription>
      <Employerwebsite>https://www.yuno.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/yuno/dc30ae7b-9c0f-426f-ae77-c58d9e4f6d6d</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1dc3f3a4-ced</externalid>
      <Title>Senior Integrations Engineer</Title>
      <Description><![CDATA[<p>At Eve, we&#39;re redefining what&#39;s possible in legal technology. Our mission is to empower plaintiff law firms with AI-driven solutions that elevate how they operate, serve clients, and grow.</p>
<p>We believe the future of law will be built by &#39;AI-Native Law Firms&#39;: firms that are managed, scaled, and optimized by intelligent systems rather than manual processes and endless administrative work. Eve&#39;s technology augments the capabilities of attorneys across every stage of a case, from intake and document review to strategy and settlement, so they can focus on what truly matters: achieving the best outcomes for their clients.</p>
<p><strong>Why Join Eve:</strong></p>
<ul>
<li>Product-market fit: Eve is used by more than 550 law firms, and we&#39;re growing fast.</li>
<li>Backed by top investors: We&#39;ve raised over $160M from world-class partners including Spark Capital, Andreessen Horowitz (a16z), Menlo Ventures, and Lightspeed.</li>
<li>Built by a world-class team: Engineers, designers, and operators from places like Scale, Meta, Airbnb, Cruise, Square, Rubrik, and Lyft are building Eve from the ground up.</li>
<li>AI-Native from day one: We&#39;re on the bleeding edge of AI, collaborating directly with teams at OpenAI and Anthropic to build best-in-class AI workflows tailored for legal work.</li>
<li>Explosive growth: We are growing revenue 2x quarter over quarter. Our revenue model is consumption-based (per legal matter), which means our success is directly tied to how deeply customers adopt the product. That starts with integration. Customers with connected CMS integrations retain and expand at significantly higher rates, and nearly every account that has expanded started with a completed integration.</li>
</ul>
<p><strong>Why This Role:</strong></p>
<p>You&#39;ll own the end-to-end integration experience for Eve&#39;s customers, from contract signed to integration live and usable. Eve integrates with multiple major case management platforms, each with its own API patterns, authentication models, deployment quirks, and edge cases. Some are cloud-native with clean REST APIs. Others are self-hosted on-prem systems behind firewalls that require weeks of configuration. Your job is to figure out the right integration path for each customer and drive it to completion. This is not a project management role, but it&#39;s not a pure engineering role either. You&#39;ll split your time between hands-on technical work (configuration, validation, debugging, scripting) and customer-facing delivery (implementation planning, expectation setting, vendor coordination, internal handoffs).</p>
<p><strong>What You Will Accomplish:</strong></p>
<ul>
<li>Own integration delivery for customers from post-sale through go-live: scoping the integration path, validating configuration (auth setup, tenant and environment validation, allowlisting, callback/redirect setup, granular permissions, document store configuration), testing in sandbox and live environments, and driving to completion</li>
<li>Scope integration approaches for new CMS partners, prototype solutions, and deliver technical requirements to Engineering for productization</li>
<li>Manage strategic vendor relationships: pressuring vendors for API access, proving whether issues are on their side or ours, identifying unsupported or undocumented API behavior, and negotiating workarounds when standard paths don’t exist</li>
<li>Debug and resolve production integration issues by tracing through logs, isolating the layer (our side, vendor side, or customer config), and resolving or routing to Engineering with a clear diagnosis</li>
<li>Build the assets required to scale integration delivery: implementation playbooks, vendor-specific setup guides, troubleshooting matrices, customer readiness checklists, known limitation documentation, escalation criteria, and handoff notes for the CS team</li>
<li>Identify patterns across customer integrations and feed productization requirements back to Engineering, without duplicating the platform Engineering owns</li>
</ul>
<p><strong>What We Are Looking For:</strong></p>
<ul>
<li>3-5+ years building, configuring, and debugging production API integrations (REST APIs, pagination, rate limiting, retry logic)</li>
<li>Experience owning implementation delivery end-to-end, not just building integrations but driving them to completion with customers</li>
<li>Proficiency in Python or Node.js (Python preferred) for scripting, data transformation, and API orchestration</li>
<li>Hands-on experience with OAuth 2.0, JWT, token refresh cycles, and permission/scoping troubleshooting</li>
<li>Comfortable working with webhooks and debugging common failure modes (expiry, missed events, configuration drift)</li>
<li>Experience mapping data across systems that represent the same concepts differently</li>
<li>Strong communication skills for customer calls, vendor negotiations, and internal coordination</li>
<li>Interest in legal technology and how law firms operate</li>
</ul>
<p><strong>You’ll Thrive in This Role If You Have:</strong></p>
<ul>
<li>Direct experience with legal tech platforms (Clio, Filevine, SmartAdvocate, Litify)</li>
<li>iPaaS or integration platform experience (Workato, Celigo, Mulesoft, Tray.io)</li>
<li>A track record of creating implementation documentation, runbooks, or process guides that other teams use</li>
<li>SQL familiarity and ability to reason about how integration data flows into a data warehouse</li>
<li>On-prem and hybrid deployment experience with self-hosted systems, firewall rules, and VPNs</li>
<li>Multi-tenant integration management across hundreds of customer accounts</li>
<li>Experience building observability and alerting for integrations at scale</li>
</ul>
<p><strong>Additional Information:</strong></p>
<p>Benefits:</p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with Employer Matching</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Autonomous Work Environment</li>
<li>Office Setup Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>API integrations, Python, Node.js, OAuth 2.0, JWT, token refresh cycles, permission/scoping troubleshooting, webhooks, debugging common failure modes, data transformation, API orchestration, legal tech platforms, iPaaS or integration platform, implementation documentation, runbooks, process guides, SQL, on-prem and hybrid deployment, multi-tenant integration management, observability and alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Eve</Employername>
      <Employerlogo>https://logos.yubhub.co/eve.com.png</Employerlogo>
      <Employerdescription>Eve provides AI-driven solutions for plaintiff law firms. It has raised over $160M from top investors and is used by more than 550 law firms.</Employerdescription>
      <Employerwebsite>https://eve.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/Eve/8254c2d0-b201-4eba-9e87-4b353fce2e8d</Applyto>
      <Location>US</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>be821069-a7f</externalid>
      <Title>Asset Data Engineer</Title>
      <Description><![CDATA[<p>Join the Asset Data team and build the streaming data infrastructure that powers Anchorage&#39;s digital asset platform. You&#39;ll design systems that ingest real-time blockchain and market data from diverse providers, transforming raw feeds into certified, trusted data products.</p>
<p>We&#39;re creating contract-governed supply chains that let us onboard new assets and providers quickly while maintaining the low-latency, high-availability SLOs our business depends on.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build streaming data pipelines for blockchain data (onchain transactions, staking rewards, validator info) and market data (prices, trades, order books)</li>
<li>Design and implement data contracts and validation gates that enforce quality and schema compliance at ingestion points</li>
</ul>
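<p>To illustrate the second bullet with a hypothetical sketch (not Anchorage&#39;s actual implementation; field names are invented), a data contract enforced as a validation gate at an ingestion point can be as simple as a declared schema that every incoming event must satisfy:</p>

```python
# Hypothetical data contract for an on-chain event feed: each field declares
# a type and whether it is required; nonconforming events are rejected at
# ingestion, before they reach downstream consumers. Invented field names.
CONTRACT = {
    "tx_hash":      {"type": str,   "required": True},
    "block_height": {"type": int,   "required": True},
    "reward":       {"type": float, "required": False},
}

def conforms(event: dict, contract: dict = CONTRACT) -> bool:
    for field, rule in contract.items():
        if field not in event:
            if rule["required"]:
                return False  # a required field is missing
            continue
        if not isinstance(event[field], rule["type"]):
            return False      # wrong type for a declared field
    # Unknown fields also violate the contract, so schema drift is caught early.
    return set(event) <= set(contract)
```

<p>Rejecting unknown fields as well as missing ones is one way such a gate surfaces provider schema changes immediately instead of letting them corrupt certified data products.</p>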
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Collaborate on designing the architecture for standardized ingestion patterns that enable rapid onboarding of new blockchains and market data feeds</li>
<li>Establish redundancy and failover patterns to meet Tier 1 availability and freshness SLOs for critical data products</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Collaborate with Protocols, Trading, and Custody teams to understand their data needs and design certified data products with clear SLAs</li>
<li>Partner with Data Platform team on orchestration, storage patterns (BigLake), and metadata management (Atlan)</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Advocate for contract-governed data supply chains and help establish engineering standards for producer patterns across the org</li>
<li>Contribute to architectural decisions and help mature the team&#39;s practices around observability, testing, and operational excellence</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5-7+ years building streaming or high-throughput data systems: You have experience designing and operating production data pipelines that handle large volumes with low latency and high reliability</li>
<li>Solid backend engineering skills: You&#39;re proficient in Go or Python and have built services that interact with streaming infrastructure (Kafka, pub/sub, websockets, REST APIs)</li>
<li>Blockchain data familiarity: You understand blockchain concepts and are comfortable working with on-chain data (transactions, events, staking, validators) across multiple chains with different data models</li>
<li>Data engineering adjacent skills: You&#39;re comfortable with data transformation patterns, schema evolution, and working with cloud data warehouses (BigQuery) and storage systems (GCS, BigLake)</li>
<li>Operational mindset: You have experience deploying and operating services on cloud platforms (preferably GCP), with strong practices around monitoring, alerting, and incident response</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Staking data expertise: You&#39;ve worked with staking rewards, validator data, or proof-of-stake blockchain infrastructure</li>
<li>Market data systems: You&#39;ve built systems that ingest and process market data (prices, trades, order books) from exchanges or data vendors</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Kafka, pub/sub, websockets, REST APIs, blockchain data, data transformation patterns, schema evolution, cloud data warehouses, storage systems, staking data expertise, market data systems, infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/82139746-fb0e-44b9-bbb6-ae078e5d251a</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0b8499f8-a97</externalid>
      <Title>Member of Compliance, Financial Crimes Compliance Data Analytics</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are seeking a highly motivated and intellectually curious Member of Compliance, Financial Crimes Compliance Data Analytics with a strong data analysis background.</p>
<p>As a vital member of the Compliance team, you will have the opportunity to support the design, implementation, and optimization of compliance programs across all applicable Anchorage Digital legal entities.</p>
<p>You will work closely with various compliance functions, particularly Financial Crimes Compliance, to drive efficiency and effectiveness within the program.</p>
<p>Your expertise will be critical in transforming raw data into actionable insights, driving process improvements, and leveraging technology to enhance our overall compliance posture.</p>
<p>This role is ideal for a proactive and &#39;young and hungry&#39; technologist who thrives on solving complex problems in a dynamic regulatory environment.</p>
<p>You&#39;ll gain deep exposure to diverse compliance domains and have the chance to apply your data expertise to strengthen Anchorage Digital&#39;s global compliance function through analytics, automation, and the strategic use of AI tools.</p>
<p>It is important that you are well-organized, have a strong analytical background, can effectively manage competing priorities, and can adapt to rapid change in a fast-paced environment.</p>
<p>If you thrive under uncertainty and are motivated to excel in a dynamic environment with competing priorities, this role is designed for you.</p>
<p>Anchorage Digital values individuals who are proactive, detail-oriented, and innovative.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Expert in Compliance-related data tables and models, understanding the nuances of the data, the underlying codes, and the limitations of the models.</li>
<li>Work with stakeholders to drive the automation of key compliance processes and workflows using internal tools, such as Know-Your-Customer checks, sanctions screening, and suspicious activity identification and reporting, to improve efficiency and reduce manual effort.</li>
<li>Experience experimenting with and deploying AI solutions.</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Support the development and enhancement of BSA/AML models, including transaction monitoring, sanctions screening, customer risk rating, blockchain analytics, and other relevant BSA/AML tools.</li>
<li>Contribute to process improvements and best practices for the FCC Analytics team, including code review, testing frameworks, project management, and documentation.</li>
<li>Conduct ad hoc analyses and respond to time-sensitive data requests from auditors, regulators, or other Compliance teams with accuracy and speed.</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Develop deep understanding of Anchorage Digital&#39;s business model across custody, staking, stablecoins, and trading, and how data flows through these domains and the compliance models.</li>
<li>Coordinate with cross-functional teams (Product, Engineering, etc.) to implement and improve tools and processes within the broader Compliance department.</li>
<li>Understand how the Compliance and FCC team fits within the broader organizational structure, and align work with team and company priorities.</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Manage competing priorities across strategic projects and urgent ad-hoc requests; ask clarifying questions to scope ambiguous requests and push back constructively when requirements are unclear, proposing alternative approaches.</li>
<li>Present findings and data insights with appropriate context, visual aids, and tailored communication style.</li>
</ul>
<p><strong>You may be a fit for this role if you have:</strong></p>
<ul>
<li>Experience: 2–3 years of experience in a data analytics or data science role, with a strong understanding of blockchain, cryptocurrency, and the financial services industry.</li>
<li>Technical Proficiency: Demonstrated ability to operate autonomously to manage multiple competing priorities.</li>
<li>Automation &amp; AI Aptitude: Experience with or a strong interest in automation principles, leveraging existing AI tools, and exploring new AI tools (such as AI Agents) to enhance productivity.</li>
<li>Technologist Mindset: While not necessarily an engineer, a strong understanding of how systems, data flows, and technologies interact is essential.</li>
<li>Eagerness to learn and apply new technologies.</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Prior experience in the financial service industry, crypto industry, start-up, or a fast-paced, evolving environment where you&#39;ve worn multiple hats and adapted quickly.</li>
<li>General understanding of regulatory compliance, financial crimes, and risk management.</li>
<li>Experience with data transformation tools like dbt or similar; familiarity with version control (Git) and software engineering best practices for analytics.</li>
<li>Python or other scripting languages for data analysis, automation, or data pipeline work.</li>
<li>Excellent communication skills with experience presenting technical findings to both technical and non-technical stakeholders.</li>
<li>Proven ability to manage competing priorities and deliver accurate results under tight deadlines, especially for compliance or audit-related requests.</li>
<li>Detail-oriented and quality-focused with a commitment to data accuracy, testing, and documentation.</li>
<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system.</li>
</ul>
<p><strong>Additional Information About Anchorage Digital:</strong></p>
<p>Who we are</p>
<p>The Anchorage Village, what we call our team, brings together the brightest minds from platform security, financial services, and distributed ledger technology.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Compliance-related data tables and models, Know-Your-Customer, sanctions screening, suspicious activity identification and reporting, BSA/AML models, transaction monitoring, customer risk rating, blockchain analytics, data transformation tools, version control, software engineering best practices, Python, scripting languages, data analysis, automation, data pipeline work</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/7a3e7cce-a01b-419a-a3f1-753466ae8bf3</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>107b978a-791</externalid>
      <Title>Data - Regulatory Reporting, Luxembourg</Title>
      <Description><![CDATA[<p>We are seeking a Data and Regulatory Reporting professional to build and operate our regulatory reporting production line, including robust data sourcing, reconciliation, data quality controls, and submission governance.</p>
<p>The role partners closely with our Data Officer, based in San Francisco, and the regulatory reporting team based in Ireland. This tight collaboration ensures timely, accurate, and auditable reporting to the CSSF and efficient responses to supervisory requests.</p>
<p><strong>Regulatory Reporting Production and Controls</strong></p>
<ul>
<li>Own end-to-end production of periodic regulatory and prudential reporting for Bridge Building S.A. (CSSF periodic reports, BCL reports, CRS, CARF and CESOP).</li>
<li>Implement and operate first-line controls to ensure data completeness, data accuracy, and data timeliness, including reconciliations to the general ledger, sub-ledgers and Bridge Building S.A.&#39;s platform.</li>
</ul>
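<p>As a hypothetical sketch of the reconciliation control described in the second bullet (not Bridge Building S.A.&#39;s actual tooling; account names and amounts are invented), a first-line check can compare per-account totals between the general ledger and the platform and surface every break:</p>

```python
# Hypothetical ledger-vs-platform reconciliation before a regulatory
# submission. Rows are (account, amount-as-string) pairs; Decimal avoids
# float rounding errors in monetary sums.
from collections import defaultdict
from decimal import Decimal

def totals(rows):
    """Sum amounts per account."""
    acc = defaultdict(Decimal)
    for account, amount in rows:
        acc[account] += Decimal(amount)
    return dict(acc)

def reconcile(ledger_rows, platform_rows):
    """Return {account: (ledger_total, platform_total)} for every break."""
    ledger, platform = totals(ledger_rows), totals(platform_rows)
    breaks = {}
    # An account missing from one side is treated as a zero balance there,
    # so one-sided accounts also show up as breaks.
    for account in ledger.keys() | platform.keys():
        l = ledger.get(account, Decimal(0))
        p = platform.get(account, Decimal(0))
        if l != p:
            breaks[account] = (l, p)
    return breaks
```

<p>Each break then becomes an exception item in the remediation workflow rather than a silent discrepancy in the submitted report.</p>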
<p><strong>Data Governance and Data Quality</strong></p>
<ul>
<li>Define reporting data models, lineage, and data dictionaries; maintain consistent definitions for key regulatory metrics.</li>
<li>Implement data quality rules, exception management, and remediation workflows in collaboration with data owners.</li>
</ul>
<p><strong>Submission Governance and Regulator Readiness</strong></p>
<ul>
<li>Maintain submission procedures, access controls, and audit trails for regulatory transmissions and supervisory requests.</li>
<li>Support responses to CSSF questions, reporting-related inspections, and remediation plans where issues are identified.</li>
</ul>
<p><strong>Automation and Scalability</strong></p>
<ul>
<li>Drive automation of reporting workflows (e.g. SQL/ETL/BI) to reduce manual effort and operational risk.</li>
<li>Maintain clear documentation, runbooks, and operational KPIs for reporting processes.</li>
</ul>
<p><strong>Key Requirements</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Finance, Data/Analytics, Statistics, Engineering, or a related field.</li>
<li>3-5 years in regulatory reporting and/or regulatory data management within regulated financial services (payments/EMI/PI, fintech, bank, or advisory).</li>
<li>Proven experience implementing reconciliations, data controls, and reporting governance with strong audit discipline.</li>
</ul>
<p><strong>Skills</strong></p>
<ul>
<li>Strong SQL and data transformation skills; ability to translate regulatory requirements into data specifications. Live SQL assessment will be part of the interview.</li>
<li>Strong controls mindset, attention to detail, and capability to manage deadlines and stakeholders.</li>
</ul>
<p><strong>Languages</strong></p>
<ul>
<li>Fluent English required</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data transformation, regulatory reporting, data governance, data quality</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Bridge Building S.A.</Employername>
      <Employerlogo>https://logos.yubhub.co/bridgebuildingsa.com.png</Employerlogo>
      <Employerdescription>Bridge Building S.A. is a Luxembourg-based company building a regulated EMI and CASP.</Employerdescription>
      <Employerwebsite>https://www.bridgebuildingsa.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7587256</Applyto>
      <Location>Luxembourg</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>65e7bd92-c31</externalid>
      <Title>FBS Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. Capgemini is responsible for managing local payroll and benefits.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Team Function</strong> The Direct modeling team is focused on creating models to guide enterprise marketing decisions that will help to promote brand awareness as well as boost sales through the direct channel.</p>
<p><strong>Role Description:</strong></p>
<p>This position plays a crucial role in the data ecosystem by iteratively transforming raw data into structured, high-quality datasets that are ready for analysis in partnership with data/decision scientists. The role primarily focuses on moderately complex business problems while receiving limited coaching and guidance from data leadership. The role combines the technical skills of a data engineer, the analytical mindset of a data analyst, and strong business acumen to ensure data is not only collected and stored efficiently but also made accessible and insightful for end users. In partnership with data/decision scientists, the position is responsible for end-to-end data workflow including data ingestion, transformation, modeling, and validation to enable data-driven decision-making across the organization. This position requires deep understanding of data engineering, business processes, and analytics principles as well as a proactive approach to solving complex data challenges.</p>
<p><strong>Essential Job Functions:</strong></p>
<p><strong>1) Data infrastructure development</strong>: Pipeline Design and Development - Architects and builds scalable data pipelines using modern ELT (Extract, Load, Transform) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar. Automates data ingestion processes from various sources including databases, APIs, and third-party services. Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery. Optimizes storage solutions for performance, cost efficiency, and scalability.</p>
<p><strong>2) Data modeling and transformation:</strong> Data Modeling - Develops and maintains logical and physical data models to support business analytics. Creates and manages dimensional models, star/snowflake schemas, and other data structures. Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages. Implements data transformation workflows to handle data cleansing, normalization, and enrichment. Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data. Implements data quality monitoring and alerting mechanisms.</p>
<p><strong>3) Collaboration and stakeholder management:</strong> Cross-Functional Collaboration - Works closely with data analysts, data scientists, and business stakeholders to gather requirements and understand their data needs. Acts as a liaison between technical teams and business units to translate business requirements into technical specifications. Technical Communication - Clearly communicates complex technical concepts and data insights to non-technical stakeholders. Provides training and support to team members on data tools, best practices, and methodologies.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Over 4 years of experience in data development and analytics engineering using Python, SQL, DBT and Snowflake.</li>
<li>Bachelor’s degree in Computer Science, Data Science, Engineering or other Math or Technology related degrees.</li>
<li>Fluency in English</li>
</ul>
<p><strong>Software / Tools</strong></p>
<ul>
<li>SQL (must have)</li>
<li>Python (must have)</li>
<li>Snowflake (must have)</li>
<li>DBT (must have)</li>
</ul>
<p><strong>Other Critical Skills</strong></p>
<ul>
<li>Data Transformation</li>
<li>Data Quality Assurance</li>
<li>Pipeline Design and Development</li>
<li>Technical Communication</li>
<li>Independent work</li>
<li>Attention to detail</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>A dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Snowflake, DBT, Data Transformation, Data Quality Assurance, Pipeline Design and Development, Technical Communication, Independent work, Attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company that provides a range of services including technology consulting, application services, and business process outsourcing.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ws76jLTZQ1JKbCcs3CUiC4/remote-fbs-analytics-engineer-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>5aabf454-ae0</externalid>
      <Title>FBS Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. Capgemini is responsible for managing local payroll and benefits.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Team Function</strong></p>
<p>The Direct Modeling team is focused on creating models that guide enterprise marketing decisions, helping to promote brand awareness and boost sales through the direct channel.</p>
<p><strong>Role Description:</strong></p>
<p>This position plays a crucial role in the data ecosystem by iteratively transforming raw data into structured, high-quality datasets that are ready for analysis in partnership with data/decision scientists. The role primarily focuses on moderately complex business problems while receiving limited coaching and guidance from data leadership. The role combines the technical skills of a data engineer, the analytical mindset of a data analyst, and strong business acumen to ensure data is not only collected and stored efficiently but also made accessible and insightful for end users. In partnership with data/decision scientists, the position is responsible for the end-to-end data workflow, including data ingestion, transformation, modeling, and validation, to enable data-driven decision-making across the organization. This position requires a deep understanding of data engineering, business processes, and analytics principles, as well as a proactive approach to solving complex data challenges.</p>
<p><strong>Essential Job Functions:</strong></p>
<p><strong>1) Data infrastructure development</strong>: Pipeline Design and Development - Architects and builds scalable data pipelines using modern ETL (Extract, Transform, Load) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar. Automates data ingestion processes from various sources including databases, APIs, and third-party services. Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery. Optimizes storage solutions for performance, cost efficiency, and scalability.</p>
<p><strong>2) Data modeling and transformation:</strong> Data Modeling - Develops and maintains logical and physical data models to support business analytics. Creates and manages dimensional models, star/snowflake schemas, and other data structures. Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages. Implements data transformation workflows to handle data cleansing, normalization, and enrichment. Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data. Implements data quality monitoring and alerting mechanisms.</p>
<p><strong>3) Collaboration and stakeholder management:</strong> Cross-Functional Collaboration - Works closely with data analysts, data scientists, and business stakeholders to gather requirements and understand their data needs. Acts as a liaison between technical teams and business units to translate business requirements into technical specifications. Technical Communication - Clearly communicates complex technical concepts and data insights to non-technical stakeholders. Provides training and support to team members on data tools, best practices, and methodologies.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Over 4 years of experience in data development and analytics engineering using Python, SQL, DBT and Snowflake.</li>
<li>Bachelor’s degree in Computer Science, Data Science, Engineering or other Math or Technology related degrees.</li>
<li>Fluency in English</li>
</ul>
<p><strong>Software / Tools</strong></p>
<ul>
<li>SQL (must have)</li>
<li>Python (must have)</li>
<li>Snowflake (must have)</li>
<li>DBT (must have)</li>
</ul>
<p><strong>Other Critical Skills</strong></p>
<ul>
<li>Data Transformation</li>
<li>Data Quality Assurance</li>
<li>Pipeline Design and Development</li>
<li>Technical Communication</li>
<li>Independent work</li>
<li>Orientation to detail</li>
</ul>
<p><strong>Benefits</strong></p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Snowflake, DBT, Data Transformation, Data Quality Assurance, Pipeline Design and Development, Technical Communication, Independent work, Orientation to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with a diverse collective of nearly 350,000 strategic and technological experts across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/htNwC3gPnBQ9oxedafiBav/remote-fbs-analytics-engineer-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2a56a653-c18</externalid>
      <Title>Palantir Engineer Specialist - Sr. Consultant - Principal</Title>
      <Description><![CDATA[<p><strong>Palantir Engineer Specialist</strong></p>
<p><strong>Sr. Consultant - Principal</strong></p>
<p><strong>London</strong></p>
<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organisation allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>
<p><strong>About Your Role</strong></p>
<p>As a <strong>Senior Consultant / Principal Consultant – Palantir Engineer</strong>, you lead and deliver end-to-end, data-driven solutions using <strong>Palantir Foundry</strong> in complex client environments. You operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions. You combine strong hands-on technical skills with a consulting mindset, taking ownership of solution design, implementation, and adoption across organisations.</p>
<p><strong>Your role will include:</strong></p>
<ul>
<li>Own the <strong>end-to-end delivery</strong> of Palantir Foundry–based solutions, from problem definition to production</li>
<li>Design and implement <strong>data pipelines and transformations</strong> across diverse data sources</li>
<li>Model data using <strong>Foundry Ontology</strong> concepts to support analytics and operational use cases</li>
<li>Build scalable, reliable solutions using <strong>Python, SQL, and PySpark</strong> within Foundry</li>
<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>
<li>Support <strong>prototyping, productionisation, and scaling</strong> of data-driven applications</li>
<li>Ensure solutions meet requirements for <strong>data quality, governance, security, and performance</strong></li>
<li>Act as a technical advisor within project teams and contribute to best practices</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>What you bring – required</strong></p>
<p><strong>Experience &amp; Seniority</strong></p>
<ul>
<li>Proven experience as a <strong>Senior Consultant or Principal Consultant</strong> in data, analytics, or platform engineering</li>
<li>Strong experience delivering <strong>client-facing data solutions</strong> in complex environments</li>
<li>Ability to take ownership and work independently in ambiguous problem spaces</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong programming skills in <strong>Python</strong> and <strong>SQL</strong>; <strong>PySpark</strong> experience required</li>
<li>Hands-on experience with <strong>Palantir Foundry</strong>, including:
<ul>
<li>Pipeline Builder / Code Workbook</li>
<li>Data integration and transformation</li>
<li>Ontology modelling and data lineage</li>
</ul>
</li>
<li>Solid understanding of <strong>data architectures</strong>, including data lakes, lakehouses, and data warehouses</li>
<li>Experience working with APIs, databases, and structured / semi-structured data</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience building <strong>scalable ETL/ELT pipelines</strong></li>
<li>Familiarity with <strong>CI/CD concepts</strong>, testing, and production deployments</li>
<li>Strong focus on <strong>solution quality, maintainability, and performance</strong></li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field <strong>or equivalent practical experience</strong></li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with <strong>cloud platforms</strong> (AWS, Azure, GCP)</li>
<li>Familiarity with <strong>containerisation</strong> (Docker, Kubernetes)</li>
<li>Prior experience as a <strong>Palantir FDE</strong> or in Foundry-heavy delivery roles</li>
<li>Domain experience in industries such as <strong>Energy, Finance, Public Sector, Healthcare, or Logistics</strong></li>
</ul>
<p><strong>Benefits</strong></p>
<p><strong>About your team</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognised as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work For. Furthermore, Infosys has been recognised by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you.... Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, Palantir Foundry, Pipeline Builder, Code Workbook, Data integration, Data transformation, Ontology modelling, Data lineage, Data architectures, Data lakes, Lakehouses, Data warehouses, APIs, Databases, Structured data, Semi-structured data, ETL/ELT pipelines, CI/CD concepts, Testing, Production deployments, Cloud platforms, Containerisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2A8U1ryerVijb4fFAc6i8u/hybrid-palantir-engineer-specialist---sr.-consultant---principal-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>8b0e9386-fa9</externalid>
      <Title>Data Engineering &amp; Data Science Consultant</Title>
      <Description><![CDATA[<p><strong>Data Engineering &amp; Data Science Consultant</strong></p>
<p>You will work hands-on on the design, build, and operationalisation of modern data and analytics solutions. You will contribute across the full lifecycle – from data ingestion and transformation to analytics, machine learning, and production deployment. You will collaborate closely with data engineers, architects, data scientists, and business stakeholders to deliver scalable, reliable, and value-driven data solutions in complex client environments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Apply data science and machine learning techniques to real-world business problems</li>
<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>
<li>Develop and optimise data transformations for analytical and machine learning workloads</li>
<li>Support the productionisation of data and ML solutions, including monitoring and optimisation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3–5 years of experience in data engineering, data science, or analytics</li>
<li>Hands-on experience delivering data and analytics solutions in project-based or client environments</li>
<li>Strong problem-solving skills and a pragmatic, delivery-oriented mindset</li>
</ul>
<p><strong>Data Engineering Foundations</strong></p>
<ul>
<li>Experience building end-to-end data pipelines (ingestion, transformation, storage)</li>
<li>Solid understanding of data modelling, data transformations, and feature engineering</li>
<li>Familiarity with cloud-based data platforms, such as Azure, AWS, or GCP</li>
</ul>
<p><strong>Applied Data Science &amp; Analytics</strong></p>
<ul>
<li>Experience applying statistical analysis and machine learning techniques</li>
<li>Strong programming skills in Python</li>
<li>Very good SQL skills and experience working with relational databases</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with streaming technologies (e.g. Kafka, Azure Event Hubs)</li>
<li>Exposure to GenAI, NLP, time series, or advanced analytics use cases</li>
<li>Experience with NoSQL databases (e.g. MongoDB, Cosmos DB)</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data science, machine learning, data engineering, cloud-based data platforms, data modelling, data transformations, feature engineering, Python, SQL, relational databases, streaming technologies, GenAI, NLP, time series, advanced analytics, NoSQL databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/43f8dm12rcrpZUsa228TbZ/data-engineering-%26-data-science-consultant-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ba5e5f71-701</externalid>
      <Title>FBS Associate Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS Associate Analytics Engineer</p>
<p>We are seeking an FBS Associate Analytics Engineer to join our team. As an FBS Associate Analytics Engineer, you will play a key role in transforming raw data into structured, high-quality datasets that are ready for analysis. You will work on low to moderately complex business problems, receiving coaching and guidance from data leadership. Your primary focus will be on end-to-end data workflow, including data ingestion, transformation, modeling, and validation to enable data-driven decision-making across the organization.</p>
<p>Responsibilities</p>
<ul>
<li>Emerging data infrastructure development with coaching and guidance: Pipeline Design and Development – Architects and builds scalable data pipelines using modern ETL (Extract, Transform, Load) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar.</li>
<li>Automates data ingestion processes from various sources including databases, APIs, and third-party services.</li>
<li>Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery.</li>
<li>Optimizes storage solutions for performance, cost efficiency, and scalability.</li>
<li>Data Modeling - Develops and maintains logical and physical data models to support business analytics.</li>
<li>Creates and manages dimensional models, star/snowflake schemas, and other data structures.</li>
<li>Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages.</li>
<li>Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data.</li>
<li>Technology Stack - Utilizes modern data tools and technologies such as SQL, Python, dbt, Airflow, and cloud platforms like AWS, Azure, or GCP.</li>
<li>Continuous Learning – Stays updated with the latest trends, best practices, and advancements in data engineering and analytics.</li>
<li>Participates in professional development opportunities to enhance technical and analytical skills.</li>
<li>Provides code as requirements for hardening and operationalization by technology teams, with significant coaching, guidance, and feedback.</li>
<li>Performs other duties as assigned.</li>
</ul>
<p>Requirements</p>
<ul>
<li>1+ years of experience working in a data environment</li>
<li>Strong analytics mindset</li>
<li>Knowledge of SQL</li>
<li>Strong verbal communication and listening skills.</li>
<li>Demonstrated written communication skills.</li>
<li>Demonstrated analytical skills.</li>
<li>Demonstrated problem solving skills.</li>
<li>Effective interpersonal skills.</li>
<li>Seeks to acquire knowledge in area of specialty.</li>
<li>Possesses strong technical aptitude. Basic experience with SQL or similar, dimensional modeling, pipeline orchestration, building data pipelines to transform data, and BI visualizations.</li>
<li>Python experience is a plus</li>
</ul>
<p>Benefits</p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, DBT, Apache Airflow, Snowflake, Redshift, BigQuery, Data Modeling, Data Transformation, Data Quality Assurance, Cloud Platforms, Python experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/jaxxjRWH9XxkRbr1TCrPb5/remote-fbs-associate-analytics-engineer-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>bc13ad37-8f9</externalid>
      <Title>ACT Integration Solutions Specialist – Aladdin Platform, Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We are seeking a dynamic Specialist with a proven track record in data integration and data conversion to join our growing Integration Solutions Practice. This role is pivotal in scaling our projects across various asset classes and front-to-back Aladdin services, employing Aladdin’s Studio platform and its robust capabilities in data integration and onboarding – which include file-based mechanisms, APIs, and the Aladdin Data Cloud. This is a client-facing position offering an opportunity to collaborate with a broad range of internal stakeholders at BlackRock.</p>
<p>Main Function</p>
<p>As an Integration Solutions Specialist for the Aladdin Platform, you will play a crucial role on Aladdin client projects in orchestrating specific technology channels that cover client activities related to data integration and data conversion or onboarding. You will be responsible for driving our project channels, providing best practice guidance to our clients’ technology teams, and supporting those teams as they build their integration and transition their data to our platform. You will work closely with the core Aladdin implementation teams to deliver on client commitments. Your expertise will be instrumental in securing an on-time and on-budget project outcome.</p>
<p>Responsibilities</p>
<p>Your key responsibilities will include:</p>
<ul>
<li><p>Own and manage Aladdin channels across Data Interfaces, Data Conversions, and Data Cloud (ADC) Deployments</p>
</li>
<li><p>Liaise with relevant client counterparts, including senior technology managers responsible for delivering the required integration and migration builds</p>
</li>
<li><p>Work with client teams to create the necessary plans and trackers, and coordinate resources and updates</p>
</li>
<li><p>Lead future state architecture sessions with client teams, to drive project scope and identify challenges</p>
</li>
<li><p>Lead and assist in interface design and data mapping sessions, support client development needs</p>
</li>
<li><p>Support the enablement of our clients working with Aladdin Studio technology and Aladdin data</p>
</li>
<li><p>Gain an in-depth knowledge of Aladdin functionality and workflows to ensure the correct integration solutions are deployed and utilized</p>
</li>
<li><p>Work closely with partners across the Aladdin business and platform to support client development needs</p>
</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li><p>Bachelor’s or Master’s degree in Engineering, Computer Science, Mathematics, or a related quantitative field</p>
</li>
<li><p>Fluency in English and excellent communication skills, with the ability to convey concepts clearly and simply</p>
</li>
<li><p>Experience with financial systems integration, cloud-based computing, or automation tools and methodologies</p>
</li>
<li><p>Understanding of finance and public or private markets, the investment lifecycle, and associated workflows</p>
</li>
<li><p>Demonstrated problem-solving prowess and a proactive mindset</p>
</li>
<li><p>Well-organized with strong project management capabilities</p>
</li>
<li><p>Ability to work collaboratively and independently within a team environment</p>
</li>
<li><p>Proficiency in SQL and a working knowledge of ETL processes are must-haves</p>
</li>
<li><p>Familiarity with programming languages such as Python or R, APIs, and data transformation tools such as Alteryx Designer, is highly regarded</p>
</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>About BlackRock</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data integration, data conversion, Aladdin Studio platform, SQL, ETL processes, Python, R, APIs, data transformation tools, financial systems integration, cloud-based computing, automation tools and methodologies, Alteryx Designer</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and individual investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/g8YUpXA9aV6CnPMQkdnvSM/act-integration-solutions-specialist-%E2%80%93-aladdin-platform%2C-associate-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9475bb73-df7</externalid>
      <Title>Product Owner, Enrichment</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p><strong>About the role</strong></p>
<p>We are looking for a Product Owner, Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>
<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes—including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritise, and close business.</p>
<p>This is a hands-on, technically-oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modelling. You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>
</ul>
<ul>
<li>Build and maintain a unified enrichment master—a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>
</ul>
<ul>
<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximise coverage, accuracy, and cost efficiency while minimising redundancy</li>
</ul>
<ul>
<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. direct field integration, parent account structures, enrichment audit trails)</li>
</ul>
<ul>
<li>Hands-on data manipulation and transformation—write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>
</ul>
<ul>
<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>
</ul>
<ul>
<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>
</ul>
<ul>
<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>
</ul>
<ul>
<li>Define and track enrichment KPIs including match rates, data completeness, freshness, accuracy, and downstream GTM impact—using metrics to continuously improve the enrichment ecosystem</li>
</ul>
<ul>
<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>
</ul>
<ul>
<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology environment</li>
</ul>
<ul>
<li>Deep expertise with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>
</ul>
<ul>
<li>Strong Salesforce experience (required)—including data modelling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>
</ul>
<ul>
<li>Hands-on technical skills for data manipulation including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>
</ul>
<ul>
<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities—able to translate business needs into technical requirements and drive execution across cross-functional teams</li>
</ul>
<ul>
<li>Dual data + RevOps mindset—equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimisation</li>
</ul>
<ul>
<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data enrichment, data operations, revenue/marketing operations, Clay, Dun &amp; Bradstreet, ZoomInfo, Salesforce, data modelling, integration architecture, SQL, data warehouses (Snowflake, BigQuery), ETL/reverse ETL pipelines, APIs, data transformation tools, product ownership, roadmap and backlog management, stakeholder management, schema design, communication with technical and business audiences, AI-powered enrichment, LLMs, prompt-based enrichment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5127289008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>1c41294a-122</externalid>
      <Title>Partner Director, Global Capgemini Alliance</Title>
      <Description><![CDATA[<p><strong>Partner Director, Global Capgemini Alliance</strong></p>
<p><strong>Location</strong></p>
<p>New York City</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Deadline to Apply</strong></p>
<p>March 31, 2026 at 3:00 AM EDT</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$378K – $420K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s GTM Partnerships team builds a strategic, global partner ecosystem designed to accelerate customer success, secure AI adoption, and drive growth in support of OpenAI’s mission toward AGI. We collaborate closely across internal teams to ensure unified strategy and seamless execution.</p>
<p><strong>About the Role</strong></p>
<p>We are hiring a Global Head of Capgemini Alliance to lead and scale OpenAI’s flagship enterprise partnership.</p>
<p>The position carries full responsibility for alliance performance, operating model integrity, and sustained expansion.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Own the global Capgemini alliance P&amp;L, including revenue growth, deployment velocity, and long-term expansion.</li>
</ul>
<ul>
<li>Run OpenAI’s side of the <strong>AI Transformation Alliance</strong>, the operating system for hundreds of joint client engagements and enterprise AI transformation opportunities.</li>
</ul>
<ul>
<li>Serve as OpenAI’s senior executive counterpart to Capgemini’s global leadership.</li>
</ul>
<ul>
<li>Set joint priorities, resolve escalations, and drive coordinated execution at scale.</li>
</ul>
<ul>
<li>Direct allocation of OpenAI’s research, product, and GTM resources across Capgemini-led engagements.</li>
</ul>
<ul>
<li>Govern Capgemini’s access to tiered OpenAI benefits, including engineering engagement, roadmap collaboration, co-build initiatives, and GTM priority, tied to performance.</li>
</ul>
<ul>
<li>Lead co-selling and execution of large, complex enterprise AI Cloud transformation opportunities.</li>
</ul>
<ul>
<li>Engage directly with enterprise executives across procurement, security, legal, risk, and technology functions.</li>
</ul>
<ul>
<li>Ensure Capgemini delivers OpenAI enterprise products as part of consulting cycles from strategy through long-term operations.</li>
</ul>
<ul>
<li>Enforce delivery quality, certification standards, and conflict-of-interest guardrails across Capgemini’s global workforce.</li>
</ul>
<ul>
<li>Define vertical strategy and lighthouse account selection across priority industries.</li>
</ul>
<ul>
<li>Stand up OpenAI’s dedicated Capgemini alliance team across partner management, technical success, enablement, and operations.</li>
</ul>
<ul>
<li>Build leadership bench strength and establish a durable cadence for alliance execution.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>Bachelor’s degree in Business, Technology, or a related field; equivalent practical experience welcomed. Master’s degree or MBA preferred.</li>
</ul>
<ul>
<li>8–12+ years of experience in partnerships, channel, business development, or growth roles within SaaS or product-led organizations.</li>
</ul>
<ul>
<li>Deep Capgemini experience with trusted senior-level relationships and strong command of internal decision dynamics.</li>
</ul>
<ul>
<li>Ownership of large-scale, multi-year partnerships with full commercial and operational accountability.</li>
</ul>
<ul>
<li>Global systems-integrator leadership experience across sales, delivery, and enablement.</li>
</ul>
<ul>
<li>Direct leadership in complex enterprise transformation deals involving executive sponsors and enterprise governance.</li>
</ul>
<ul>
<li>Proven ability to design and scale joint operating models between large organizations.</li>
</ul>
<ul>
<li>High-stakes execution leadership with comfort operating under accountability.</li>
</ul>
<ul>
<li>Ability to influence cross-functional leaders across product, sales, legal, communications, and partner organizations.</li>
</ul>
<ul>
<li>Platform and services go-to-market fluency.</li>
</ul>
<ul>
<li>Enterprise AI, cloud, and data transformation literacy with credibility among CIOs, CTOs, and enterprise architects.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of human diversity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$378K – $420K</Salaryrange>
      <Skills>Partnerships, Channel, Business Development, Growth, SaaS, product-led organizations, Capgemini, AI Transformation, Cloud, Data Transformation, Enterprise AI, CIOs, CTOs, Enterprise Architects, Leadership, Strategy, Execution, Accountability, Influence, Communication, Collaboration, Problem-solving, Adaptability, Resilience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/3d8c184d-d9a9-40aa-94d0-548d93585ff8</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>7ca02034-131</externalid>
      <Title>Partner Director, Global Accenture Alliance</Title>
      <Description><![CDATA[<p><strong>Partner Director, Global Accenture Alliance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$378K – $420K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s GTM Partnerships team builds a strategic, global partner ecosystem designed to accelerate customer success, secure AI adoption, and drive growth in support of OpenAI’s mission toward AGI. We collaborate closely across internal teams to ensure unified strategy and seamless execution.</p>
<p><strong>About the Role</strong></p>
<p>We are hiring a Global Head of Accenture Alliance to lead and scale OpenAI’s flagship enterprise partnership.</p>
<p>The position carries full responsibility for alliance performance, operating model integrity, and sustained expansion.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Own the global Accenture alliance P&amp;L, including revenue growth, deployment velocity, and long-term expansion.</li>
</ul>
<ul>
<li>Run OpenAI’s operating model for the Alliance across joint enterprise programs.</li>
</ul>
<ul>
<li>Serve as OpenAI’s senior executive counterpart to Accenture’s global leadership.</li>
</ul>
<ul>
<li>Set joint priorities, resolve escalations, and drive coordinated execution at scale.</li>
</ul>
<ul>
<li>Direct allocation of OpenAI’s research, product, and GTM resources across Accenture-led engagements.</li>
</ul>
<ul>
<li>Govern Accenture’s access to tiered OpenAI benefits, including engineering engagement, roadmap collaboration, co-build initiatives, and GTM priority, tied to performance.</li>
</ul>
<ul>
<li>Lead co-selling and execution of large, complex enterprise AI Cloud transformation opportunities.</li>
</ul>
<ul>
<li>Engage directly with enterprise executives across procurement, security, legal, risk, and technology functions.</li>
</ul>
<ul>
<li>Ensure Accenture delivers OpenAI enterprise products as part of consulting cycles from strategy through long-term operations.</li>
</ul>
<ul>
<li>Enforce delivery quality, certification standards, and conflict-of-interest guardrails across Accenture’s global workforce.</li>
</ul>
<ul>
<li>Define vertical strategy and lighthouse account selection across priority industries.</li>
</ul>
<ul>
<li>Stand up OpenAI’s dedicated Accenture alliance team across partner management, technical success, enablement, and operations.</li>
</ul>
<ul>
<li>Build leadership bench strength and establish a durable cadence for alliance execution.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>Bachelor’s degree in Business, Technology, or a related field; equivalent practical experience welcomed. Master’s degree or MBA preferred.</li>
</ul>
<ul>
<li>8–12+ years of experience in partnerships, channel, business development, or growth roles within SaaS or product-led organizations.</li>
</ul>
<ul>
<li>Deep Accenture experience with trusted senior-level relationships and strong command of internal decision dynamics.</li>
</ul>
<ul>
<li>Ownership of large-scale, multi-year partnerships with full commercial and operational accountability.</li>
</ul>
<ul>
<li>Global systems-integrator leadership experience across sales, delivery, and enablement.</li>
</ul>
<ul>
<li>Direct leadership in complex enterprise transformation deals involving executive sponsors and enterprise governance.</li>
</ul>
<ul>
<li>Proven ability to design and scale joint operating models between large organizations.</li>
</ul>
<ul>
<li>High-stakes execution leadership with comfort operating under accountability.</li>
</ul>
<ul>
<li>Ability to influence cross-functional leaders across product, sales, legal, communications, and partner organizations.</li>
</ul>
<ul>
<li>Platform and services go-to-market fluency.</li>
</ul>
<ul>
<li>Enterprise AI, cloud, and data transformation literacy with credibility among CIOs, CTOs, and enterprise architects.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of human diversity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$378K – $420K • Offers Equity</Salaryrange>
      <Skills>Partnerships, Channel, Business Development, Growth, SaaS, Product-led, Accenture, Global Systems-Integrator, Sales, Delivery, Enablement, Enterprise Transformation, Executive Sponsors, Enterprise Governance, Joint Operating Models, High-Stakes Execution, Accountability, Cross-Functional Leadership, Product, Sales, Legal, Communications, Partner Organizations, Platform, Services Go-to-Market, Enterprise AI, Cloud, Data Transformation, CIOs, CTOs, Enterprise Architects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/628c4ccb-9314-4c2d-826a-51ed585cef5e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4a7597fd-d7a</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make a global impact while working with a team spread across 5 continents. Razer is also a great place to work, providing the unique, gamer-centric #LifeAtRazer experience that will put you on an accelerated growth path, both personally and professionally.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>
<ul>
<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>
<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>
<li>Develop high-performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting various AI applications.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on experience working with vector/graph databases (e.g., Neo4j)</li>
<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Vector/graph databases (Neo4j), data engineering, AI/ML-driven data architectures, Python, SQL, cloud infrastructure (AWS, Azure, Google Cloud Platform), Terraform, Docker, Kubernetes, Airflow, Prefect, Spark, Dask, dbt (Data Build Tool), streaming and batch data processing, data lake/lakehouse management, SQL and NoSQL databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global company that creates cutting-edge products and experiences that define the ultimate gameplay. They are guided by their mission &apos;For Gamers. By Gamers.&apos; and are relentlessly pushing boundaries and leading the charge in AI for gaming, shaping the future of the industry.</Employerdescription>
      <Employerwebsite>https://www.razer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-01-01</Postedate>
    </job>
    <job>
      <externalid>e5eb908e-6f9</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>
<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on experience working with vector/graph databases (e.g., Neo4j)</li>
<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Vector/graph databases (Neo4j), data engineering, AI/ML-driven data architectures, Python, SQL, Terraform, Docker, Kubernetes, Airflow, Prefect, Spark, Dask, dbt (Data Build Tool)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://www.razer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>