<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>69e8923b-c16</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. Our team uses data and insights to drive evidence-based decision-making, generating actionable insights about our customers, products, and business.</p>
<p>As a Senior Data Scientist, you&#39;ll partner with product teams to help them identify important questions and answer those questions with data. You&#39;ll work closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities.</p>
<p>You&#39;ll design, build, and update end-to-end data pipelines, working closely with stakeholders to drive the collection of new data and the refinement of existing data sources and tables. You&#39;ll also partner closely with product researchers to build a holistic understanding of our customers, products, and business.</p>
<p>Increasingly, you&#39;ll use AI-assisted tools to accelerate analysis, coding, and insight generation. You&#39;ll identify opportunities to automate your own workflows and reduce time spent on repetitive tasks. You&#39;ll build scalable data products that enable stakeholders to self-serve insights and raise the bar for how AI is used within RAD.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with product teams to help them identify important questions and answer those questions with data</li>
<li>Working closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities</li>
<li>Designing, building, and updating end-to-end data pipelines</li>
<li>Partnering closely with product researchers to build a holistic understanding of our customers, products, and business</li>
<li>Using AI-assisted tools to accelerate analysis, coding, and insight generation</li>
<li>Identifying opportunities to automate your own workflows and reduce time spent on repetitive tasks</li>
<li>Building scalable data products that enable stakeholders to self-serve insights</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience working with data to solve problems and drive evidence-based decisions</li>
<li>Strong SQL skills and solid grounding in statistics</li>
<li>Experience working closely with product teams</li>
<li>Proven track record of delivering actionable insights that drive measurable impact with minimal supervision</li>
<li>Strong product intuition, business acumen, and ability to connect analysis to strategy</li>
<li>Excellent communication skills (technical and non-technical), with a focus on driving decisions and outcomes</li>
<li>Strong ownership, curiosity, and growth mindset</li>
<li>Experience with a scientific computing language (e.g., Python)</li>
</ul>
<p>Preferred skills include:</p>
<ul>
<li>Experience with data modeling and ETL pipelines (esp. dbt)</li>
<li>Experience building internal tools, data products, or self-serve analytics capabilities</li>
<li>Experience leveraging AI across the data workflow - from ideation and coding to analysis and communication</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>MacBooks are our standard, but we’re happy to get you whatever equipment helps you get your job done</li>
</ul>
<p>Job details:</p>
<ul>
<li>Experience Level: Senior</li>
<li>Employment Type: Full-time</li>
<li>Workplace Type: Hybrid</li>
<li>Category: Engineering</li>
<li>Industry: Technology</li>
<li>Salary Range: Competitive salary and equity in a fast-growing start-up</li>
<li>Required Skills: SQL, statistics, experience working with product teams, strong product intuition, business acumen, excellent communication skills, strong ownership, curiosity, and growth mindset, experience with a scientific computing language (e.g., Python)</li>
<li>Preferred Skills: data modeling and ETL pipelines (esp. dbt), building internal tools, data products, or self-serve analytics capabilities, leveraging AI across the data workflow</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, statistics, experience working with product teams, strong product intuition, business acumen, excellent communication skills, strong ownership, curiosity, growth mindset, experience with a scientific computing language (e.g., Python), data modeling and ETL pipelines (esp. dbt), building internal tools, data products, or self-serve analytics capabilities, leveraging AI across the data workflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is a customer service company that provides AI-powered solutions for businesses. Founded in 2011, it has nearly 30,000 global clients.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7749323</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>158a429c-4d8</externalid>
      <Title>Senior Data Scientist - Product Analytics</Title>
      <Description><![CDATA[<p>We are seeking a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. The RAD team uses data and insights to drive evidence-based decision-making. We&#39;re a team of data scientists and product researchers who use data to unlock actionable insights about our customers, products, and business.</p>
<p>As a Senior Data Scientist, you will partner with product teams to help them identify important questions and answer those questions with data. You will work closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities.</p>
<p>Your responsibilities will include designing, building, and updating end-to-end data pipelines, working closely with stakeholders to drive the collection of new data and the refinement of existing data sources and tables. You will also partner closely with product researchers to build a holistic understanding of our customers, products, and business.</p>
<p>You will influence our product roadmap and product strategy through experimentation, exploratory analysis, and quantitative research. You will build and automate actionable models and dashboards, craft data stories, and share your findings and recommendations across R&amp;D and the broader company.</p>
<p>You will drive and shape core RAD foundations and help us improve how the RAD org operates.</p>
<p>We are looking for someone with 5+ years of experience working with data to solve problems and drive evidence-based decisions. You should have excellent SQL skills and experience of applying analytical and statistical approaches to problem-solving. You should also have a proven track record of initiating and delivering actionable analysis and insights that drive tangible impact with minimal supervision.</p>
<p>Excellent communication skills (technical and non-technical) and a focus on driving impact are essential. A strong growth mindset and sense of ownership, innate passion, and curiosity are also required.</p>
<p>Experience with a scientific computing language (such as R or Python) is necessary. Experience with BI/Visualization tools like Tableau, Superset, and Looker is a bonus. Experience working with product teams and leveraging AI tools to boost efficiency and creativity across the data science workflow is also desirable.</p>
<p>We offer a competitive salary and equity in a fast-growing start-up. We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen. Regular compensation reviews, life assurance, comprehensive health and dental insurance, open vacation policy, flexible holidays, paid maternity leave, and 6 weeks paternity leave are also part of our benefits package.</p>
<p>Our working policy is hybrid, with employees expected to be in the office at least three days per week. We have a radically open and accepting culture, avoiding divisive subjects to foster a safe and cohesive work environment for everyone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Analytical and statistical approaches, Scientific computing language (R or Python), BI/Visualization tools (Tableau, Superset, Looker), Product teams experience, AI tools, Data modeling and ETL pipelines, Communication skills (technical and non-technical)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/6317929</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>93c1356c-a95</externalid>
      <Title>Principal Software Engineer, Web Data - Tech Lead</Title>
      <Description><![CDATA[<p>We&#39;re looking for an exceptional Principal Software Engineer to serve as the de facto Technical Lead for our Web Data Acquisition (WDA) team. This is a highly visible, hands-on technical leadership role where you&#39;ll own the architectural direction for crawling systems, evolve and unify crawling platforms into a best-in-class stack, and elevate a high-performing engineering team.</p>
<p>As a Principal Software Engineer, you&#39;ll solve complex distributed systems challenges, build modular tooling that accelerates delivery, and set the standard for observability and operational excellence. You&#39;ll have a dedicated manager handling all HR and administrative responsibilities. A product manager connects business needs with technical work. Your focus is 100% technical leadership, mentorship, and hands-on execution.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Technical Leadership &amp; System Design: Proven experience building web crawling or large-scale data systems from scratch. Strong architectural skills designing scalable, fault-tolerant distributed systems. Track record leading complex technical initiatives and driving architecture direction for teams.</li>
<li>Data Engineering Expertise: Deep background in large-scale data engineering (terabytes daily). Hands-on experience with cloud data warehouses (BigQuery, Snowflake). Experience with Apache Kafka, Kubernetes (GKE/EKS), and orchestration tools (Airflow).</li>
<li>Web Crawling &amp; Data Extraction: Deep expertise in web crawling technologies and advanced scraping (Scrapy or similar). Experience extracting structured/unstructured web data and SERP extraction. Knowledge of proxy infrastructure management, anti-bot detection, and ethical crawling.</li>
<li>Leadership &amp; Team Development: Experience mentoring engineers at all levels and fostering collaborative culture. Strong ability to influence technical direction and establish best practices. Track record hiring, coaching, and developing senior engineers.</li>
</ul>
<p>Ideal Candidate Profile:</p>
<ul>
<li>10+ years software engineering experience. 5+ years focused on data engineering. 3+ years in senior/principal-level technical leadership.</li>
<li>Strong CS fundamentals (algorithms, data structures, distributed systems). Self-starter who thrives in fast-paced environments.</li>
</ul>
<p>Core Technical Stack:</p>
<ul>
<li>Python &amp; Java</li>
<li>Apache Kafka</li>
<li>GCP (BigQuery, GKE, Vertex AI)</li>
<li>Snowflake &amp; Starburst/Trino</li>
<li>Terraform</li>
<li>Scrapy / Web Scraping Frameworks</li>
<li>Proxy Management Systems</li>
<li>Distributed Systems &amp; Kubernetes</li>
<li>Apache Airflow</li>
<li>Large-Scale ETL Pipelines</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Python, Java, Apache Kafka, Kubernetes, GCP, Snowflake, Terraform, Scrapy, Proxy Management Systems, Distributed Systems, Apache Airflow, Large-Scale ETL Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8378092002</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b1fa4435-fc2</externalid>
      <Title>Business Systems Analyst, Data Enrichment</Title>
      <Description><![CDATA[<p>We are seeking a Business Systems Analyst, Data Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>
<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes, including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritize, and close business.</p>
<p>This is a hands-on, technically-oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modeling. You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>
<li>Build and maintain a unified enrichment master: a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>
<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximize coverage, accuracy, and cost efficiency while minimizing redundancy</li>
<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. direct field integration, parent account structures, enrichment audit trails)</li>
<li>Hands-on data manipulation and transformation: write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>
<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>
<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>
<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>
<li>Define and track enrichment KPIs including match rates, data completeness, freshness, accuracy, and downstream GTM impact, using metrics to continuously improve the enrichment ecosystem</li>
<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>
<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology environment</li>
<li>Deep expertise with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>
<li>Strong Salesforce experience (required), including data modeling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>
<li>Hands-on technical skills for data manipulation including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>
<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities; able to translate business needs into technical requirements and drive execution across cross-functional teams</li>
<li>Dual data + RevOps mindset: equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimization</li>
<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>
<li>Familiarity with data quality and MDM (Master Data Management) frameworks and tools</li>
<li>Experience with routing and scoring tools such as LeanData, and marketing automation platforms</li>
<li>Background in startup signal detection: identifying high-potential early-stage companies through funding, hiring, technographic, and intent signals</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $190,000-$270,000 USD</p>
<p>Logistics</p>
<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>
<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>
<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000-$270,000 USD</Salaryrange>
      <Skills>data enrichment, data operations, revenue/marketing operations, enrichment tools, enrichment strategy, salesforce, sql, data warehouses, etl/reverse etl pipelines, apis, data transformation tools, product ownership, roadmaps, backlogs, stakeholder priorities, technical requirements, cross-functional teams, data science, data engineering, infrastructure, schema design, pipeline and territory optimization, communication skills, technical and business audiences, stakeholder discovery sessions, present enrichment strategy and impact to leadership, ai-powered enrichment, llms, prompt-based enrichment, emerging companies, startups, hard-to-enrich segments, data quality, mdm frameworks, routing and scoring tools, marketing automation platforms, startup signal detection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5127289008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>85f1ada0-78d</externalid>
      <Title>Security Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Security Engineer at the senior-level or above on our Security Operations team with strong detection engineering experience. You&#39;ll design and develop high-fidelity detection content, build and operate the data pipelines that power our security operations, develop automation playbooks that accelerate response, and work across a uniquely diverse telemetry landscape spanning cloud infrastructure, embedded vessel platforms, corporate systems, and operational technology.</p>
<p>This role is heavily weighted toward detection engineering. You should think in terms of adversary behaviour and telemetry coverage, not just alert triage. You&#39;ll own detections end-to-end: from identifying gaps in coverage, through designing and testing detection logic, to tuning and validating in production.</p>
<p>Key Responsibilities:</p>
<ul>
<li><p>Design, build, test, and tune high-fidelity detection rules and analytic queries across endpoint, cloud, network, identity, and DLP telemetry sources</p>
</li>
<li><p>Develop and maintain detection content using detection-as-code practices including version-controlled logic, automated testing, and CI/CD deployment</p>
</li>
<li><p>Map detection coverage to MITRE ATT&amp;CK, identify gaps, and prioritise new detection development based on threat intelligence and business risk</p>
</li>
<li><p>Engineer correlation rules, behavioural analytics, and anomaly-based detections that minimise false positives while surfacing real adversary tradecraft</p>
</li>
<li><p>Own the detection lifecycle from initial development through production tuning, performance monitoring, and retirement</p>
</li>
<li><p>Build and operate pipelines to ingest, normalise, enrich, and manage security telemetry at scale across diverse data sources, using Terraform and infrastructure-as-code practices to deploy and maintain logging and detection infrastructure</p>
</li>
<li><p>Design and maintain log collection, parsing, and enrichment configurations that ensure the right telemetry is available at the right fidelity for detection and investigation</p>
</li>
<li><p>Evaluate and onboard new telemetry sources as Saronic&#39;s infrastructure and threat landscape evolve</p>
</li>
<li><p>Monitor pipeline health, data quality, and ingestion reliability to ensure detections operate on complete and accurate data</p>
</li>
<li><p>Develop and manage automated response playbooks in SOAR platforms to accelerate containment and reduce analyst toil</p>
</li>
<li><p>Build automation that enriches alerts with contextual data, reducing investigation time and improving analyst decision-making</p>
</li>
<li><p>Support incident response efforts and translate lessons learned into improved detections and playbooks</p>
</li>
<li><p>Partner with SOC analysts, Cloud Security, Product Security, and IT teams to close visibility and detection gaps across environments</p>
</li>
<li><p>Collaborate with threat intelligence to ensure detection engineering is informed by current adversary TTPs relevant to defence, maritime, and autonomous systems</p>
</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li><p>3+ years of hands-on experience in detection engineering, security operations, security automation, or a closely related security engineering role</p>
</li>
<li><p>Demonstrated experience designing, testing, and tuning detection rules and analytic queries across production security telemetry (endpoint, cloud, network, identity, or DLP)</p>
</li>
<li><p>Hands-on experience with SIEM platforms and proficiency with query languages such as SPL, KQL, or equivalent</p>
</li>
<li><p>Experience building and operating security data pipelines, including log ingestion, normalisation, enrichment, and data quality management</p>
</li>
<li><p>Understanding of data engineering concepts including ETL pipelines, data modelling, schema design, and indexing as applied to security telemetry</p>
</li>
<li><p>Hands-on coding experience in Python, PowerShell, Go, or Rust for security automation, detection tooling, or pipeline development, and familiarity with Terraform for managing detection and logging infrastructure as code</p>
</li>
<li><p>Understanding of MITRE ATT&amp;CK framework and its application to detection coverage and gap analysis</p>
</li>
<li><p>Ability to obtain and maintain a security clearance</p>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li><p>Experience in defence, aerospace, robotics, autonomy, or other high-assurance environments</p>
</li>
<li><p>Experience with EDR platforms including custom detection rule creation and telemetry analysis</p>
</li>
<li><p>Experience with cloud-native detection in AWS and Microsoft 365/Azure</p>
</li>
<li><p>Experience using Terraform to deploy and manage security monitoring infrastructure, log pipeline components, or cloud-native security service configurations</p>
</li>
<li><p>Hands-on experience with incident response, threat hunting, or adversary emulation</p>
</li>
<li><p>Exposure to embedded Linux, operational technology, or ICS telemetry and detection</p>
</li>
<li><p>Familiarity with NIST SP 800-171, NIST SP 800-53, or CMMC and their logging and monitoring requirements</p>
</li>
<li><p>Relevant certifications such as GCIH, GCIA, GCDA, GSOM, OSDA, or OSCP</p>
</li>
</ul>
<p>Additional Information:</p>
<ul>
<li><p>Benefits: Medical Insurance, Dental and Vision Insurance, Time Off, Parental Leave, Competitive Salary, Retirement Plan, Stock Options, Life and Disability Insurance, Pet Insurance</p>
</li>
<li><p>This role requires access to export-controlled information or items that require &#39;U.S. Person&#39; status.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>detection engineering, security operations, security automation, SIEM platforms, query languages, data engineering, ETL pipelines, data modelling, schema design, indexing, Python, PowerShell, Go, Rust, Terraform, MITRE ATT&amp;CK framework, security clearance, EDR platforms, cloud-native detection, incident response, threat hunting, adversary emulation, embedded Linux, operational technology, ICS telemetry, NIST SP 800-171, NIST SP 800-53, CMMC, GCIH, GCIA, GCDA, GSOM, OSDA, OSCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies is a leader in revolutionizing defense autonomy at sea, dedicated to developing state-of-the-art solutions that enhance maritime operations for the Department of Defense (DoD) through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/79424778-76c1-41c6-8385-cba5f6ddc50e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7619176a-424</externalid>
      <Title>Forward Deployed Engineer</Title>
      <Description><![CDATA[<p>You will spend the majority of your time embedded with Hebbia&#39;s most strategic customers, building the last mile of our platform for their specific workflows, data, and domain. This is a hands-on engineering role. You write production code, you ship it, you own it.</p>
<p>As a Forward Deployed Engineer, you are the bridge between Hebbia&#39;s platform and the real-world complexity of our customers&#39; environments. You sit with the customer&#39;s team, understand their hardest problems, and build solutions that make Hebbia indispensable. Then you bring what you&#39;ve learned back to our engineering and product teams to make the platform better for everyone.</p>
<p>This role is for engineers who want to combine deep technical work with direct customer impact. You will see your code create value in days, not months. The FDE team operates at the intersection of engineering and go-to-market. You will work closely with our core engineering team (shared code review, architecture alignment, deploy pipelines) and with our account teams, who direct where you deploy and what you focus on. Our team works in person 5 days a week at our offices in NYC and SF.</p>
<p>Responsibilities:</p>
<ul>
<li>Embed with strategic accounts to deeply understand their domain, data, and workflows</li>
<li>Build custom integrations, workflow automations, and domain-specific solutions on top of Hebbia&#39;s platform</li>
<li>Write production code that deploys through our CI/CD pipelines and meets our engineering standards</li>
<li>Own the technical relationship with the customer&#39;s team during your engagement</li>
<li>Prototype fast, validate with the customer, iterate, and ship</li>
<li>Return from engagements and work with engineering and product to generalize reusable patterns into platform capabilities</li>
<li>Participate in code review, on-call rotation, and architecture discussions alongside core engineering</li>
<li>Build connectors to customer data sources and document management systems</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years software development experience at a venture-backed startup or top technology firm</li>
<li>Strong full-stack engineering skills. You build across the stack: APIs, data pipelines, frontend when needed, infrastructure when needed.</li>
<li>Comfortable working in ambiguity. Customer problems are messy and underspecified. You figure it out.</li>
<li>High customer empathy. You enjoy sitting with users, understanding their workflows, and translating pain points into technical solutions.</li>
<li>Fast and pragmatic. You prototype, validate, and ship in days and weeks, not quarters.</li>
<li>Strong communicator. You are the primary technical point of contact for the customer. You can talk to both engineers and executives.</li>
<li>Experience with cloud platforms (e.g., AWS) and modern backend technologies (Python, TypeScript, Go)</li>
<li>Experience with data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.) is a plus</li>
<li>Experience with LLMs, RAG systems, or applied AI is a plus but not required</li>
<li>Prior experience in finance, legal, or consulting domains is a plus</li>
<li>Experience with customer-facing engineering roles (solutions engineering, professional services, or similar) is a plus</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 to $300,000</Salaryrange>
      <Skills>Full-stack engineering, Cloud platforms (e.g., AWS), Modern backend technologies (Python, TypeScript, Go), Data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.), Customer-facing engineering roles (solutions engineering, professional services, or similar), LLMs, RAG systems, or applied AI, Finance, legal, or consulting domains</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform that generates alpha and drives upside for investors and bankers. Founded in 2020, it powers investment decisions for large asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4679338005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>afa2aeaa-57c</externalid>
      <Title>Business Intelligence Engineer II (BIE II)</Title>
      <Description><![CDATA[<p>Have you ever ordered a product from Amazon and been amazed at how fast it gets to you?</p>
<p>Every day, Amazon engineers work relentlessly to decrease the time from click to delivery for your products. The Amazon Fulfillment Technologies (AFT) team owns all of the software and infrastructure that powers Amazon&#39;s world-class fulfillment engine. Our team builds massive, complex data systems that capture data at every step of the automated pipeline and use it to proactively identify efficiency and cost improvements that get packages to our customers faster.</p>
<p>We are in search of a brilliant, self-driven, and seasoned BIE II to join our team. In this role, you will build scalable solutions, including extensive data models and complex ETL pipelines, and use your expertise to raise the bar on data timeliness, discoverability, and availability.</p>
<p><strong>Key Job Responsibilities</strong></p>
<ul>
<li>Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards on the key drivers of our business</li>
<li>Partner with Product Managers and business teams to consult on, develop, and implement KPIs, automated reporting solutions, and infrastructure improvements to meet business needs</li>
<li>Develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support business needs</li>
<li>Perform both ad-hoc and strategic analyses</li>
<li>Communicate and present effectively, verbally and in writing, to both business and technical teams</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.</li>
<li>Experience with data visualization using Tableau, Quicksight, or similar tools</li>
<li>Experience with data modeling, warehousing and building ETL pipelines</li>
<li>Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift</li>
<li>Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets</li>
<li>Master&#39;s degree in BI, engineering, statistics, computer science, mathematics, finance, or an equivalent quantitative field</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data analysis, data visualization, ETL pipelines, data modeling, SQL, Python, AWS solutions, data mining, large-scale databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Amazon</Employername>
      <Employerlogo>https://logos.yubhub.co/amazon.jobs.png</Employerlogo>
      <Employerdescription>Amazon is a multinational technology company that focuses on e-commerce, cloud computing, digital streaming, and artificial intelligence. It is one of the world&apos;s largest and most valuable companies.</Employerdescription>
      <Employerwebsite>https://amazon.jobs</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://amazon.jobs/en/jobs/3197763/bi-engineer-aft-bi</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>2b37c983-438</externalid>
      <Title>Associate Data Engineer</Title>
      <Description><![CDATA[<p><strong><strong>Job Description</strong></strong></p>
<p>You&#39;ll be joining the Applications team, which is an Engineering function within Quantexa&#39;s R&amp;D department that is focused on internally building real-world applications of the Quantexa Platform.</p>
<p>This function enables demonstrations of the product and develops SaaS offerings, whilst also testing and refining new Platform features before they are deployed by clients. The function develops and releases its own tools that feed into these internal applications and are also packaged and released to help standardize and accelerate all Quantexa deployments.</p>
<p>It encompasses four distinct sub-teams:</p>
<p><strong>Data Engineering Accelerators</strong></p>
<ul>
<li>Developing Quantexa&#39;s libraries for cleansing, parsing and standardising data used in entity resolution</li>
<li>Finding efficiency/performance improvements through big data testing and building performance tooling</li>
<li>Owning best practices in entity resolution and network building</li>
</ul>
<p><strong>Data Feeds</strong></p>
<ul>
<li>Building standardised and reusable code for processing various third party/open source data sets</li>
<li>Managing an internal data lake for the provision of this data by other teams for testing and analytics</li>
<li>Owning general best practices for ingesting and processing data to get it ready for use in the Quantexa Platform, including pipelines and scheduling</li>
</ul>
<p><strong>Demos</strong></p>
<ul>
<li>Developing, deploying and maintaining all Quantexa demos, showcasing the different use cases for the Quantexa Platform</li>
<li>Owning the Quantexa Trial platform, for prospective Quantexa clients to see the product in action using real data provided by Data Feeds</li>
<li>Building tools to enable solution owners and sales to create their own custom demos</li>
</ul>
<p><strong>SaaS</strong></p>
<ul>
<li>Building Quantexa&#39;s emerging SaaS offering, a cloud hosted, standardized deployment of the Quantexa Platform</li>
<li>Targeting mid-market banks in the US for Retail AML initially, providing them with a cost-effective Quantexa solution, then expanding in future to more use cases and geographies</li>
<li>Implementing cutting-edge features of the Quantexa Platform, ensuring SaaS customers are always on the latest and greatest version of Quantexa</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Data processing/ETL pipelines</li>
<li>Analysing and examining real and varied data</li>
<li>Full stack development, but with a heavy focus on the data processing/ETL side</li>
<li>Solving difficult problems with efficient, resilient, high impact code</li>
<li>Working in the cloud with production-grade systems</li>
<li>Defining best-practices and sharing expertise you’ve developed</li>
<li>Working in a fast moving, Agile environment</li>
<li>Growing and thriving within one of the UK’s fastest growing scale-ups</li>
</ul>
<p><strong>Experience in the following would be beneficial:</strong></p>
<ul>
<li>A strong coding background, ideally in Scala or otherwise in a relevant language that will allow you to learn Scala quickly (e.g. Java/Python)</li>
<li>Big data, either from a software deployment/implementation or a data science perspective</li>
<li>Working with big data technology, ideally Spark but others will also be useful such as Airflow or Elasticsearch</li>
<li>Working in an Agile environment</li>
<li>Building data processing pipelines for use in production batch systems, including either traditional ETL pipelines and/or analytics pipelines</li>
<li>Manipulating data through cleansing, parsing, standardising, etc., especially in relation to improving data quality/integrity</li>
<li>Building and deploying SaaS products</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and Company Bonus</li>
<li>Flexible working hours in a hybrid workplace &amp; free access to global WeWork locations &amp; events</li>
<li>Pension Scheme with a company contribution of 6% (if you contribute 3%)</li>
<li>25 days annual leave (with the option to buy up to 5 days) + birthday off!</li>
<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period</li>
<li>Family: Enhanced Maternity, Paternity, Adoption, or Shared Parental Leave</li>
<li>Private Healthcare with AXA</li>
<li>EAP, Well-being Days, Gym Discounts</li>
<li>Free Calm App subscription, the #1 app for meditation, relaxation and sleep</li>
<li>Workplace Nursery Scheme</li>
<li>Team&#39;s Social Budget &amp; Company-wide Summer &amp; Winter Parties</li>
<li>Tech &amp; Cycle-to-Work Schemes</li>
<li>Volunteer Day off</li>
<li>Dog-friendly Offices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data processing/ETL pipelines, Analysing and examining real and varied data, Full stack development, Solving difficult problems with efficient, resilient, high impact code, Working in the cloud with production-grade systems, Defining best-practices and sharing expertise you’ve developed, Working in a fast moving, Agile environment, Growing and thriving within one of the UK’s fastest growing scale-ups, A strong coding background, ideally in Scala or otherwise in a relevant language that will allow you to learn Scala quickly (e.g. Java/Python), Big data, either from a software deployment/implementation or a data science perspective, Working with big data technology, ideally Spark but others will also be useful such as Airflow or Elasticsearch, Working in an Agile environment, Building data processing pipelines for use in production batch systems, including either traditional ETL pipelines and/or analytics pipelines, Manipulating data through cleansing, parsing, standardising etc, especially in relation to improving data quality/integrity, Building and deploying SaaS products</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Quantexa</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Quantexa is a technology company that has been innovating the data analytics market since 2016. It started out in FinTech, helping tackle serious criminal activity, and now its potential is virtually limitless.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ibgyYHfEjnWon7xfTj4nmA/associate-data-engineer-in-london-at-quantexa</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9475bb73-df7</externalid>
      <Title>Product Owner, Enrichment</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p><strong>About the role</strong></p>
<p>We are looking for a Product Owner, Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>
<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes—including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritise, and close business.</p>
<p>This is a hands-on, technically-oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modelling. You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>
<li>Build and maintain a unified enrichment master—a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>
<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximise coverage, accuracy, and cost efficiency while minimising redundancy</li>
<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. direct field integration, parent account structures, enrichment audit trails)</li>
<li>Hands-on data manipulation and transformation—write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>
<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>
<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>
<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>
<li>Define and track enrichment KPIs including match rates, data completeness, freshness, accuracy, and downstream GTM impact—using metrics to continuously improve the enrichment ecosystem</li>
<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>
<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology environment</li>
<li>Deep expertise with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>
<li>Strong Salesforce experience (required)—including data modelling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>
<li>Hands-on technical skills for data manipulation including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>
<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities—able to translate business needs into technical requirements and drive execution across cross-functional teams</li>
<li>Dual data + RevOps mindset—equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimisation</li>
<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data enrichment, data operations, revenue/marketing operations, enrichment tools, data sources, platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, Salesforce, data modelling, integration architecture, SQL, data warehouses, ETL/reverse ETL pipelines, APIs, data transformation tools, product ownership, roadmaps, backlogs, stakeholder priorities, data science, data engineering, infrastructure, schema design, communication, technical and business audiences, AI-powered enrichment, LLMs, prompt-based enrichment, emerging companies, startups, hard-to-enrich segments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5127289008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>448a56f3-ab5</externalid>
      <Title>Director of Data Engineering and Agentic AI Automation, Finance</Title>
      <Description><![CDATA[<p><strong>Director of Data Engineering and Agentic AI Automation, Finance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Finance</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>We are looking for a Director of Data Engineering and Agentic AI Automation to lead the next generation of our finance data infrastructure. As OpenAI expands its Finance operations, we need scalable and trustworthy data systems to match the pace and complexity of our growth. This includes well-modeled, auditable data for revenue recognition, financial reporting, and planning, supported by reliable pipelines that connect ERP, planning, and operational systems. You will lead a group of analytics engineers, data engineers, and AI engineers to build the data pipelines that connect our internal engineering systems with enterprise platforms such as Oracle Fusion ERP. This role will also define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making across Finance.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and maintain scalable, auditable data infrastructure that powers accurate financial information, with a focus on revenue recognition, compute attribution, and close automation.</li>
<li>Lead and grow teams of analytics engineers, data engineers, and AI engineers to deliver high-impact, intelligent data systems.</li>
<li>Guide work across financial close and allocations automation, B2C revenue automation from engineering systems to ERP (including reconciliation with cash and source systems), and other mission-critical financial processes.</li>
<li>Design and implement data pipelines connecting ERP, planning, and operational systems, including Oracle Fusion, Anaplan, and Workday.</li>
<li>Build and support scalable, audit-proof architecture that enables reliable financial reporting and compliance.</li>
<li>Develop data and AI-powered workflows that enhance forecasting accuracy, compliance automation, and operational efficiency.</li>
<li>Create and maintain data marts and products that support stakeholders across Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, and Controller teams.</li>
<li>Define and enforce best practices for data modeling, lineage, observability, and reconciliation across finance data domains.</li>
<li>Set the technical direction and manage team structure, mentoring engineers and overseeing contractors or system integrators to ensure delivery of high-quality outcomes.</li>
<li>Partner with senior leaders across Finance, Engineering, and Infrastructure to align on priorities and integrate new automation capabilities.</li>
<li>Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.</li>
<li>Own and maintain Tier-1 data pipelines with strict SLA, data quality, and compliance standards.</li>
<li>Drive the long-term roadmap for agentic AI enablement to build the foundation for “Finance on OpenAI.”</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>12+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets</li>
<li>Proficiency in SQL and Python, with demonstrated experience in schema design, data modeling, and orchestration frameworks</li>
<li>Expertise in distributed data processing technologies such as Apache Spark, Kafka, and cloud-native storage (e.g., S3, ADLS)</li>
<li>Deep knowledge of enterprise data architecture, especially within Finance and Supply Chain</li>
<li>Familiarity with financial processes (close, allocations, revenue recognition) and supply chain data models (supply and demand planning, procurement, vendor master), along with experience ingesting large volumes of B2C data from internal engineering systems</li>
<li>Experience integrating with contract manufacturers and external logistics providers is a strong plus</li>
<li>Strong track record of partnering with senior business stakeholders</li>
</ul>
<p><strong>Work Environment</strong></p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$347K – $490K • Offers Equity</Salaryrange>
      <Skills>SQL, Python, Apache Spark, Kafka, cloud-native storage, data modeling, orchestration frameworks, distributed data processing technologies, enterprise data architecture, financial processes, supply chain data models, ETL pipelines, complex datasets, schema design, data engineering, data infrastructure, auditable data, revenue recognition, financial reporting, planning, ERP, planning, operational systems, Oracle Fusion, Anaplan, Workday, data marts, products, stakeholders, Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, Controller, data modeling, lineage, observability, reconciliation, finance data domains, team structure, engineers, contractors, system integrators, predictive analytics, autonomous agent workflows, large-scale automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>347000</Compensationmin>
      <Compensationmax>490000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e84e7b7e-a82e-411e-929a-615dc3080280</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>6215398a-2c4</externalid>
      <Title>Senior Software Engineer, Forward Deployed (U.S. Public Sector)</Title>
      <Description><![CDATA[<p><strong>About Invisible</strong></p>
<p>Invisible Technologies makes AI work. Our end-to-end AI platform structures messy data, automates digital workflows, deploys agentic solutions, measures outcomes, and integrates human expertise where it matters most.</p>
<p>Our platform cleans, labels, and structures company data so it is ready for AI. It adapts models to each business and adds human expertise when needed, the same approach we have used to improve models for more than 80% of the world’s top AI companies, including Microsoft, AWS, and Cohere.</p>
<p>Our successes span industries, from supply chain automation for Swiss Gear to AI-enabled naval simulations with SAIC to validating NBA draft picks for the Charlotte Hornets.</p>
<p>Profitable for more than half a decade, Invisible reached $134M in revenue and ranked as the number two fastest growing AI company on the 2024 Inc. 5000. In September 2025, we raised $100M in growth capital to accelerate our mission of making AI actually work in the enterprise and to advance our platform technology.</p>
<p><strong>About The Role</strong></p>
<p>As a Senior Forward Deployed Engineer (FDE) for our U.S. Public Sector team at Invisible, you&#39;ll lead high-impact, AI-powered solutions that reshape how our clients operate their most critical workflows. You won’t just build and deploy — you’ll drive the strategy, architecture, and execution of end-to-end systems, working directly with client stakeholders and our internal delivery teams.</p>
<p>This is a hybrid role: equal parts AI architect, hands-on engineer, and technical advisor. You’ll work on the front lines with ambitious clients, turning operational challenges into scalable AI workflows. You’ll be trusted to lead complex engagements, make architectural calls, and mentor others across technical and non-technical domains.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Scope, design, and lead implementation of AI-driven solutions in partnership with delivery teams and executive stakeholders</li>
<li>Translate ambiguous workflows and business needs into repeatable systems and production-ready technical architectures</li>
<li>Lead architecture design and trade-off discussions across performance, scalability, cost, and reliability</li>
<li>Build usable systems from messy data and incomplete or evolving requirements</li>
<li>Apply AI/ML solutions in highly regulated environments (e.g., defense, intelligence, healthcare, finance)</li>
<li>Own projects end-to-end—from initial discovery and scoping through implementation, deployment, and post-launch iteration</li>
<li>Build shared infrastructure, reusable components, and internal playbooks to improve delivery consistency and team velocity</li>
<li>Mentor mid-level engineers and contribute to the development of forward-deployed AI engineering practices at Invisible</li>
</ul>
<p><strong>What We Need</strong></p>
<ul>
<li>Active U.S. Department of Defense Secret clearance or higher</li>
<li>5+ years of software engineering experience, including work on data-intensive, ML, or backend systems</li>
<li>Ability to work on-site 2–3 days per week at offices located in the greater Washington, D.C. and Reston, VA area</li>
<li>Python &amp; ML/LLM frameworks: Hands-on experience with Python and modern ML/LLM tooling (e.g., Hugging Face, LangChain, OpenAI, Pinecone)</li>
<li>Deployment &amp; infrastructure: Experience building and operating API-based services using Docker, FastAPI, Kubernetes, and major cloud platforms (AWS, GCP)</li>
<li>Platform &amp; data management: Familiarity with workflow orchestration, pub/sub systems (e.g., Kafka), schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, and replay processes</li>
<li>Experience leading requirements-gathering activities and translating stakeholder input into technical specifications</li>
</ul>
<p><strong>What’s In It For You</strong></p>
<p>Invisible is committed to fair and competitive pay, ensuring that compensation reflects both market conditions and the value each team member brings. Our salary structure accounts for regional differences in cost of living while maintaining internal equity.</p>
<p>For this position, the annual salary ranges by location are:</p>
<p>Tier 2 Salary Range: $164,000 – $240,000 USD</p>
<p>You can find more information about our geographic pay tiers here. During the interview process, your Invisible Talent Acquisition Partner will confirm which tier applies to your location. For candidates outside the U.S., compensation is adjusted to reflect local market conditions and cost of living.</p>
<p>Bonuses and equity are included in offers above entry level. Final compensation is determined by a combination of factors, including location, job-related experience, skills, knowledge, internal pay equity, and overall market conditions. Because of this, every offer is unique. Additional details on total compensation and benefits will be discussed during the hiring process.</p>
<p><strong>What It&#39;s Like to Work at Invisible:</strong></p>
<p>At Invisible, we’re not just redefining work—we’re reinventing it. We operate at the intersection of advanced AI and human ingenuity, pushing the boundaries of what’s possible to unlock productivity and scale. Ownership is at the core of everything we do. Here, you won’t just execute tasks—you’ll build, innovate, and shape the future alongside world-class clients pushing the boundaries of AI.</p>
<p>We expect bold ideas, relentless drive, and the ability to turn ambiguity into opportunity. The pace is fast, the challenges are big, and the growth is unmatched. We’re not for everyone, and we’re okay with that. If you’re looking for predictable routines, this isn’t the place for you. But if you’re driven to create, thrive in dynamic environments, and want a front-row seat to the AI revolution, you’ll fit right in.</p>
<p><strong>Country Hiring Guidelines:</strong> <em>Invisible is a hybrid organization with offices and team members located around the world. While some roles may offer remote flexibility, most positions involve in-office collaboration and are tied to specific locations. Any location-based requirements will be clearly outlined in the job description.</em></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$164,000 – $240,000 USD</Salaryrange>
      <Skills>Python, ML/LLM frameworks, Docker, FastAPI, Kubernetes, AWS, GCP, workflow orchestration, pub/sub systems, schema governance, data contracts, Unity Catalog, Delta/ETL pipelines, replay processes, Hugging Face, LangChain, OpenAI, Pinecone</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Invisible Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/invisible.co.png</Employerlogo>
      <Employerdescription>Invisible Technologies makes AI work. Our end-to-end AI platform structures messy data, automates digital workflows, deploys agentic solutions, measures outcomes, and integrates human expertise where it matters most. Our platform cleans, labels, and structures company data so it is ready for AI.</Employerdescription>
      <Employerwebsite>https://www.invisible.co/join-us/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>164000</Compensationmin>
      <Compensationmax>240000</Compensationmax>
      <Applyto>https://job-boards.eu.greenhouse.io/invisibletech/jobs/4741723101</Applyto>
      <Location>Washington DC–Baltimore</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>