<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>db261609-388</externalid>
      <Title>Principal Data &amp; Ontology Architect - AI Enablement</Title>
<Description><![CDATA[<p>We are looking for a Principal Data &amp; Ontology Architect to support the implementation and adoption of data and ontology enablement practices and standards within Control Tower Operations, enabling scalable, governed, and business-aligned AI initiatives.</p>
<p>The successful candidate will serve as the primary bridge between Business Units, Global IT, and Control Tower Operations, ensuring shared understanding of data practices, workflows, and requirements. They will apply established standards for semantic modeling, domain alignment, concept reuse, and ontology lifecycle management.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Supporting the implementation and ongoing maintenance of ontology enablement practices and the operating model strategy that underpins AI, analytics, and digital initiatives across multiple Business Units</li>
<li>Applying established standards for semantic modeling, domain alignment, concept reuse, and ontology lifecycle management</li>
<li>Serving as the enterprise subject-matter authority for ontology-related topics, providing recommendations and guidance to governance and leadership forums</li>
<li>Collaborating with Global IT and enterprise data architecture to ensure ontology practices align with enterprise data platforms and Control Tower operational processes</li>
<li>Partnering with Business Units to understand domain concepts, terminology, operational data, and AI use cases, translating them into ontology-aligned data structures</li>
<li>Guiding Business Units in contributing domain models, metadata, and data assets into the enterprise ontology using defined governance and intake processes</li>
<li>Enabling repeatable onboarding of Business Unit data into AI initiatives, reducing reliance on ad-hoc IT engagement and minimizing duplicated effort</li>
<li>Serving as a liaison between Business Units and Global IT for AI data and ontology-related matters</li>
<li>Engaging with Global IT teams to understand enterprise data platforms, workflows, standards, and operational constraints</li>
<li>Translating Global IT practices, requirements, and workflows into clear, actionable guidance for Business Unit data stewards</li>
<li>Educating, guiding, and supporting Business Unit data stewards on their roles in data governance, ontology contribution, and AI data enablement</li>
<li>Supporting the development and documentation of workflows, expectations, and operating models for how BU data stewards engage with the Control Tower and Global IT</li>
<li>Ensuring Business Unit Data Stewards understand how to prepare, govern, and submit data assets for ontology integration and AI use</li>
<li>Promoting consistent adoption of governance, quality, and semantic standards across Business Units</li>
<li>Supporting integration of data and ontology enablement into Control Tower workflows</li>
<li>Providing operational insight into data readiness, semantic risks, and governance gaps to inform Control Tower decision-making</li>
<li>Identifying systemic issues and contributing recommendations to drive continuous improvement of data enablement processes</li>
<li>Ensuring semantic integrity, data quality, lineage, and consistency are maintained as data assets flow into AI solutions</li>
<li>Identifying systemic issues and recommending continuous improvement opportunities to Control Tower Operations leadership</li>
<li>Influencing corrective actions, tooling investments, or governance updates to mitigate long-term risk</li>
</ul>
<p>This role requires a minimum of 10 years of relevant work experience in data architecture, data governance, ontology development, semantic modeling, or related disciplines, supporting cross-functional initiatives spanning multiple business units and IT organizations.</p>
<p>The ideal candidate will have in-depth expertise in ontology design, semantic modeling, and domain-driven data architecture, as well as experience contributing to the development and implementation of data and ontology strategies. They will also have demonstrated experience serving as a bridge between business stakeholders and IT organizations, with a strong ability to translate technical platforms, workflows, and constraints into business-understandable guidance.</p>
<p>A Bachelor&#39;s level degree or diploma in Computer Science, Data Science/Engineering, Applied Mathematics/Statistics, Electronics/Electrical, Information Technology/Information Sciences, or a related field of study is required. A Master&#39;s or Ph.D. degree is preferred.</p>
<p>The successful candidate will be comfortable operating in ambiguous, evolving environments with enterprise-level impact, and will have a systems-thinking mindset with understanding of AI, analytics, and enterprise data platforms.</p>
<p>Highly desirable skills include proficiency in OWL (Web Ontology Language) and the RDF/RDFS graph-based data model, storage in graph databases such as Neo4j or Amazon Neptune, and querying RDF-based ontologies with SPARQL.</p>
<p>This is an onsite job based at our ADC, Raymond, OH office. One telecommuting workday per week may be possible with prior departmental approval.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$120,400.00 - $150,500.00</Salaryrange>
      <Skills>ontology design, semantic modeling, domain-driven data architecture, data governance, AI data enablement, data quality, lineage, consistency, OWL (Web Ontology Language), RDF/RDFS, graph databases, Neo4j, Amazon Neptune, SPARQL, ontology development, data architecture, data science, electronics, electrical, information technology, information sciences</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Honda</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.honda.com.png</Employerlogo>
      <Employerdescription>Honda is a multinational Japanese conglomerate that produces automobiles, motorcycles, and power equipment. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://careers.honda.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.honda.com/us/en/job/10812/Principal-Data-Ontology-Architect-AI-Enablement</Applyto>
      <Location>Raymond</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>c6bfc6b4-74f</externalid>
      <Title>Senior Data Scientist - Marketing (all genders)</Title>
      <Description><![CDATA[<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers. Together, we build machine learning and analytics products that directly influence GMV, conversion, and retention.</p>
<p>Within the department, we’re building a new Marketing Analytics team and are looking for a Senior Data Scientist to drive its data science initiatives. In this role, you’ll work closely with Analysts, Engineers, and Marketing stakeholders to develop and productionize advanced machine learning, statistical, and predictive models that improve marketing performance and drive measurable company growth.</p>
<p>As a Senior Data Scientist – Marketing, you’ll take strong ownership of data science initiatives that directly shape our marketing strategy and growth. You will:</p>
<ul>
<li>Partner closely with Marketing, Marketing Analytics, and Marketing Technology to identify opportunities and translate business questions into scalable data science solutions</li>
<li>Lead the development of high-impact machine learning and statistical models for marketing use cases such as channel allocation, ad bidding, churn prediction, lifetime value, revenue attribution, and business metrics forecasting</li>
<li>Work end-to-end - from translating business questions into hypotheses to researching, building, validating, and deploying models</li>
<li>Run experiments and iterate in production: design A/B tests, monitor model performance, and continuously improve based on measured impact</li>
<li>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation</li>
<li>Help define the team&#39;s roadmap and ways of working as a founding member of Marketing Analytics - your input will help shape this function</li>
<li>Act as a senior role model in the team, sharing best practices and helping raise the bar for data science at Holidu</li>
</ul>
<p>We&#39;re looking for someone with 5+ years of experience as a Data Scientist, with clear ownership of projects that delivered measurable business impact. You should have a degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field, and strong expertise in machine learning, statistics, and predictive analytics, with hands-on experience using Python and SQL.</p>
<p>Experience with marketing data science use cases such as attribution modeling, customer lifetime value prediction, churn modeling, or bid optimization is also required. You should have a solid understanding of marketing concepts across channels (e.g. Performance Marketing, SEO, CRM, Affiliate) and how data science can improve them.</p>
<p>Additionally, you should have experience working with modern data stacks, ideally including AWS (Redshift, Athena, S3), Airflow, dbt, and Git. A collaborative mindset paired with great communication skills is essential, as you&#39;ll need to work with diverse stakeholders and explain complex topics in a simple way.</p>
<p>AI proficiency is also a plus: you should be comfortable using AI to enhance coding, planning, and monitoring, able to integrate AI tools (such as Claude Code, Codex, Copilot, etc.) into your workflow, and able to teach others to use them efficiently.</p>
<p>If you&#39;re excited about the opportunity to shape the future of travel with products used by millions of guests and thousands of hosts, apply now!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, Statistics, Predictive Analytics, Python, SQL, Marketing Data Science, Attribution Modeling, Customer Lifetime Value Prediction, Churn Modeling, Bid Optimization, AI, CI/CD Pipelines, Retraining Workflows, Lineage Tracking, Documentation, Airflow, dbt, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that helps users find and book vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2510157</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f03ad2d-96f</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a Software Engineer on the Research Data Platform team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>We do not require prior ML or AI training experience. If you enjoy working closely with technical users, learning new domains quickly, and building tools people actually want to use, you&#39;ll pick up the research context fast.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines (e.g., Spark, BigQuery, DuckDB, Parquet); high-volume time series data ingestion, storage, and efficient querying; data cataloging, lineage, or metadata management systems; or ML experiment tracking or metrics platforms.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>large-scale ETL, columnar storage formats, query engines, high-volume time series data, data cataloging, lineage, metadata management systems, ML experiment tracking, Spark, BigQuery, DuckDB, Parquet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a57339aa-939</externalid>
      <Title>Staff Data Engineer, tvScientific</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization.</p>
<p>You will work at the intersection of data engineering, identity resolution, privacy, and platform reliability. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and maintain a scalable identity resolution platform</li>
<li>Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources</li>
<li>Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable</li>
<li>Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets</li>
<li>Build and operate batch and streaming pipelines using modern data stack tools</li>
<li>Create clear documentation, standards, and runbooks for identity and governance systems</li>
<li>Own data governance foundations including data lineage, quality checks, schema enforcement, and access controls</li>
<li>Implement privacy-by-design principles (PII handling, consent enforcement, retention policies)</li>
<li>Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)</li>
<li>Establish monitoring and alerting for data quality, freshness, and integrity</li>
</ul>
<p>What we&#39;re looking for:</p>
<ul>
<li>Production data engineering experience</li>
<li>Bachelor’s degree in computer science or a related field, or equivalent experience</li>
<li>Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala</li>
<li>Experience in delivering significant technical initiatives and building reliable, large scale services</li>
<li>Experience in delivering APIs backed by relationship-heavy datasets</li>
<li>Experience implementing data governance practices, including data quality, metadata management, and access controls</li>
<li>Strong understanding of privacy-by-design principles and handling of sensitive or regulated data</li>
<li>Familiarity with data lakes, cloud warehouses, and storage formats</li>
<li>Strong proficiency in AWS services</li>
<li>Excellent written and verbal communication skills</li>
<li>Successful design and implementation of scalable and efficient data infrastructure</li>
<li>High attention to detail in implementation of automated data quality checks</li>
<li>Effective collaboration with cross-functional teams</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>
<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Spark, Scala, Data Engineering, Identity Resolution, Privacy, Platform Reliability, Data Governance, Data Lineage, Quality Checks, Schema Enforcement, Access Controls, AWS Services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tvScientific</Employername>
      <Employerlogo>https://logos.yubhub.co/tvscientific.com.png</Employerlogo>
      <Employerdescription>tvScientific is a technology company that provides a CTV advertising platform for performance marketers.</Employerdescription>
      <Employerwebsite>https://www.tvscientific.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7642253</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22ff82ac-40b</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a software engineer on this team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>You may be a good fit if you have significant software engineering experience, particularly building data-intensive applications or internal tooling. You should enjoy working directly with users, gathering requirements iteratively, and shipping things that get adopted. You should also be results-oriented, with a bias towards flexibility and impact.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines; high-volume time series data; data cataloging, lineage, or metadata management systems; ML experiment tracking or metrics platforms; complex data visualization; and full-stack web application development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>software engineering, data-intensive applications, internal tooling, data pipelines, storage systems, APIs, libraries, web interfaces, dataset management, data cataloging, provenance tooling, research workflows, adjacent teams, large-scale ETL, columnar storage formats, query engines, high-volume time series data, lineage, metadata management systems, ML experiment tracking, metrics platforms, complex data visualization, full-stack web application development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>47483e13-115</externalid>
      <Title>Staff Product Manager - Technical</Title>
      <Description><![CDATA[<p>As a Technical Product Manager, you will work closely with product managers, engineering teams, and technical field organizations to ensure the features we design and ship deliver outstanding user experiences.</p>
<p>You will help shape our transactional database capabilities to meet the performance, reliability, and scalability requirements of modern applications and AI agents, or you will help ensure data assets are governed effectively, enabling controlled access, compliance, and visibility across the organization.</p>
<p>This role requires you to deeply understand both functional and non-functional requirements, such as performance, scalability, security, and compliance and how customers meet these requirements today. You will evaluate how these workloads are implemented on the Databricks Data Intelligence Platform and identify opportunities to improve the product experience.</p>
<p>You will act as a bridge between technical field teams and product and engineering. Insights from customer PoCs, benchmarks, and real-world implementations will directly inform product decisions. You will also help ensure that product improvements are clearly communicated back to the field.</p>
<p>The impact you will have:</p>
<ul>
<li>Identify and drive impactful product improvements in your domain of expertise</li>
<li>Define and run performance benchmarks (OLTP focus) or governance best practices and reference architectures (governance focus)</li>
<li>Shape and prioritize a meaningful product roadmap</li>
<li>Support go-to-market efforts and guide product adoption</li>
<li>For governance focus: define processes and mechanisms for how AI agents securely and compliantly access the Databricks Data Intelligence Platform</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years of experience with a strong, hands-on technical background</li>
<li>Strong empathy for customers across the full spectrum of Data Platform users</li>
<li>Deep domain expertise in one of the following:
<ul>
<li>Transactional databases (OLTP), cloud-native databases, or distributed systems</li>
<li>Data governance, data catalogs, lineage, and access management</li>
</ul>
</li>
<li>Experience evaluating and comparing technologies across dimensions such as performance, reliability, governance, and compliance</li>
<li>Strong Python and SQL skills</li>
<li>Experience using AI-assisted development tools</li>
<li>Experience with systems design and architecture</li>
<li>Proven ability to work effectively across product, engineering, and technical field teams</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Transactional databases, Cloud-native databases, Distributed systems, Data governance, Data catalogs, Lineage, Access management, Python, SQL, AI-assisted development tools, Systems design and architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8394060002</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is a technology company that powers product experiences.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1c431665-20b</externalid>
      <Title>Data Governance and Management Lead</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. We are seeking a Data Governance &amp; Management Lead within the Global Analytics team to help develop and implement data controls, data quality standards, and governance practices across the platform.</p>
<p>This role supports data integrity, metadata, and access controls to help ensure data is accurate, consistent, and fit for purpose. This is a hands-on role that requires strong technical fluency, structured problem-solving, and the ability to translate governance requirements into practical implementations within data systems.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Working knowledge of data governance, data management, and data quality frameworks</li>
<li>Experience supporting the implementation of data controls within data pipelines and reporting systems</li>
<li>Advanced proficiency in SQL, Python, or other data query and analysis tools</li>
<li>Proficiency with business intelligence and data visualization tools such as Looker, Power BI, or Tableau</li>
<li>Experience with database design, including understanding complex data schemas and data extraction</li>
<li>Familiarity with data lineage, metadata management, and data modeling concepts</li>
<li>Ability to define and implement data quality rules and validation checks</li>
<li>Understanding of data access principles, including role-based access and data classification</li>
<li>Ability to document data processes and controls clearly and in a structured way</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Oversee the data governance program, identify improvement areas, and implement best practices to enhance data quality, integrity, and security</li>
<li>Develop and implement data quality standards and monitoring processes, including establishing data quality metrics and thresholds</li>
<li>Assist in managing the data issue lifecycle, including tracking and supporting remediation efforts</li>
<li>Manage the data governance platform (Atlan) and serve as the primary subject matter expert</li>
<li>Assist in data classification efforts, including identifying and categorizing sensitive data and critical data elements</li>
<li>Manage external data requests, including regulatory inquiries, ensuring compliance with banking regulations</li>
<li>Monitor and report on key data governance metrics and KPIs, providing insights and recommendations to senior management</li>
<li>Lead data governance meetings and workshops, facilitating discussions and decision-making to drive the data governance program forward</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Have a deep understanding of Anchorage Digital’s strategy and business lines.</li>
<li>Understand how data supports decision-making and operational processes across the organization</li>
<li>Possess strategic thinking and vision, with the ability to develop and implement a comprehensive data governance strategy aligned with organizational goals and objectives</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Able to communicate complex issues clearly and credibly to a wide range of audiences.</li>
<li>Document data processes, controls, and findings clearly for internal stakeholders</li>
<li>Build effective relationships and rapport with stakeholders, including cross-functional and external partners</li>
<li>Communicate, organize, and execute cross-team goals and projects, leveraging relationships and resources to solve problems</li>
<li>Collaborate with Data Platform, InfoSec, Product, and Engineering partners</li>
</ul>
<p><strong>You may be a fit for this role if you have:</strong></p>
<ul>
<li>Bachelor’s degree required. Advanced degrees or certifications in data analytics or governance preferred</li>
<li>4–7 years of experience in data governance, data management, data quality, or data analytics</li>
<li>Hands-on experience implementing or supporting data quality and governance practices</li>
<li>Experience managing data classification, access controls, and external data requests</li>
<li>Experience working with data pipelines, reporting systems, or analytical datasets</li>
<li>Experience writing, editing, or reviewing technical documentation for regulatory or banking contexts</li>
<li>Strong attention to detail, with a focus on accuracy, completeness, and consistency in data governance processes and controls</li>
<li>Ability to work independently on defined tasks and contribute to team objectives</li>
<li>Strong problem-solving skills and comfort working in structured, detail-oriented environments</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You&#39;ve kept up to date with the proliferation of blockchain and crypto innovations.</li>
<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system. :)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data governance, data management, data quality frameworks, SQL, Python, Looker, Power BI, Tableau, database design, data lineage, metadata management, data modeling, data access principles, role-based access, data classification</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/5bfbd64c-933e-418c-9c07-5aea50212c0d</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d9dee25d-ca8</externalid>
      <Title>Regulatory Reporting Program Manager, Stablecoin</Title>
      <Description><![CDATA[<p>As a Regulatory Reporting Program Manager, you will support the Global Regulatory Reporting team by partnering with Legal, Compliance, Accounting, Business/Product, and Data Analytics teams across Stripe to maintain Stripe&#39;s NORAM regulatory reporting program.</p>
<p>This may include understanding and documenting the applicable regulatory reporting requirements in the region, implementing systems and processes for comprehensively tracking those requirements for each of Stripe&#39;s North American entities, maintaining the end-to-end processes for the collation of data, production of reports, and continuously monitoring compliance to meet the expectations of Stripe&#39;s regulators.</p>
<p>You will need to be comfortable straddling the technology and financial services worlds every day, enjoying the puzzle that creates, seeking creative solutions, and moving quickly, often in the face of ambiguity.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the end-to-end U.S. regulatory reporting program for digital assets and stablecoin-related financial activities, defining reporting scope, governance, timelines, and accountability across required regulatory filings.</li>
<li>Interpret U.S. regulatory reporting requirements applicable to stablecoins, digital assets, payments, and custody, and translate them into clear reporting specifications, data definitions, and execution plans in partnership with Legal and Compliance.</li>
<li>Manage the full regulatory reporting lifecycle, from data sourcing and aggregation through validation, internal review, sign-off, and timely submission to regulators.</li>
<li>Ensure regulatory reports accurately reflect stablecoin-specific activities and risks, including issuance, redemption, circulation, reserves, custody arrangements, and transaction flows across on-chain and off-chain systems.</li>
<li>Design and maintain a robust regulatory reporting control framework, including data quality checks, reconciliations, documentation, and issue remediation to support audit and exam readiness.</li>
<li>Partner with Engineering, Data, Finance, Compliance and Legal to improve data lineage, transparency, and automation across regulatory reporting processes as the business scales.</li>
<li>Own regulatory reporting change management, including assessing the impact of new or evolving stablecoin regulations, product launches, and system changes on reporting scope, data requirements, and controls.</li>
<li>Develop and maintain regulator-ready documentation, including reporting methodologies, assumptions, data lineage, and process documentation to support supervisory reviews and examinations.</li>
<li>Serve as the primary point of contact for regulatory reporting matters during U.S. regulatory exams, audits, and regulatory inquiries.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>U.S. compliance and regulatory obligations, Stablecoin issuance, Payments, Custody-related activities, Regulatory reporting requirements, Data sourcing and aggregation, Validation, Internal review, Sign-off, Timely submission to regulators, Data quality checks, Reconciliations, Documentation, Issue remediation, Data lineage, Transparency, Automation, Regulatory reporting change management, Regulator-ready documentation, Stablecoins, Digital assets, Fintech platforms, Regulatory reporting for banks, Trust companies, Payment institutions, Money services businesses, Reporting automation, Data pipelines, Reporting tools</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses that provides regulated payments and financial services products.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7650177</Applyto>
      <Location>SEA, SF</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>e8d98a5b-1ea</externalid>
      <Title>AI &amp; ML Engineer</Title>
      <Description><![CDATA[<p>About Charlotte Tilbury Beauty</p>
<p>Founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, Charlotte Tilbury Beauty has revolutionised the face of the global beauty industry.</p>
<p>The AI &amp; ML Engineering team accelerates the adoption of AI across the business, championing innovation while ensuring our machine learning products are robust, scalable, and cost-efficient.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with stakeholders to scope problems and identify the right solution - whether leveraging existing AI tools or building custom workflows &amp; solutions.</li>
<li>Design and implement agentic systems using techniques spanning RAG, grounding, prompt engineering, and orchestration on a GCP-first stack.</li>
<li>Build and maintain production ML pipelines and services for non-GenAI use cases (e.g. recommender systems, customer segmentation models, and marketing optimisation modules, leveraging supervised, unsupervised, and/or econometric modelling approaches).</li>
<li>Develop APIs and microservices for AI/ML solutions, ensuring security, scalability, and observability.</li>
<li>Implement CI/CD for ML services, write infrastructure as code, and monitor for model/data drift and performance.</li>
<li>Establish robust guardrails for safe AI usage, including prompt security, practical evaluation frameworks, and compliance with privacy regulations.</li>
<li>Drive and evangelise best practices, reusable templates, and documentation to scale AI/ML delivery across the business.</li>
<li>Collaborate with data engineers, data scientists, front- and back-end engineers, product managers, and legal &amp; infosec colleagues to deliver impactful solutions end-to-end.</li>
</ul>
<p>Who you will work with</p>
<p>The AI &amp; ML Engineer Lead and the wider data team.</p>
<p>About you</p>
<p>The role requires a blend of technical depth and product sense, including:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>Strong Python engineering skills (FastAPI, testing, typing) and experience with cloud-native development (GCP preferred).</li>
<li>Hands-on experience with GCP Vertex AI (model endpoints, pipelines, embeddings, vector search) or equivalent cloud-native ML platforms (e.g. AWS SageMaker, Azure ML) and agent orchestration frameworks such as LangChain and LangGraph.</li>
<li>Solid understanding of MLOps - CI/CD, IaC (Terraform), experiment tracking, model registry, and monitoring.</li>
<li>Proven experience deploying and operating ML systems in production (batch and real-time).</li>
<li>Familiarity with RAG architectures, prompt engineering, and evaluation techniques.</li>
<li>Strong grasp of security, privacy, and governance principles (IAM, secrets, PII handling).</li>
<li>Excellent communication skills and ability to work with non-technical stakeholders.</li>
</ul>
<p>In addition to the above, we would LOVE if you have:</p>
<ul>
<li>Experience with vector databases and retrieval strategies.</li>
<li>Knowledge of recommender systems and ranking models.</li>
<li>Familiarity with LLM evaluation tools (e.g., RAGAS, TruLens, LangSmith, Arize).</li>
<li>Exposure to feature stores, data lineage, and observability stacks.</li>
<li>Experience in e-commerce or retail environments.</li>
<li>Demonstrable ability to weigh up buy/build/configure decisions in the LLM space.</li>
</ul>
<p>Why join us?</p>
<p>Be a part of this values driven, high growth, magical journey with an ultimate vision to empower everyone, everywhere to be the best version of themselves.</p>
<ul>
<li>We’re a hybrid model with flexibility, allowing you to work how best suits you.</li>
<li>25 days holiday (plus bank holidays) with an additional day to celebrate your birthday.</li>
<li>Inclusive parental leave policy that supports all parents and carers throughout their parenting and caring journey.</li>
<li>Financial security and planning with our pension and life assurance for all.</li>
<li>Wellness and social benefits including Medicash, Employee Assist Programs, and regular social connects with colleagues.</li>
<li>Bring your furry friend to work with you on our allocated dog-friendly days and spaces.</li>
<li>And not to forget our generous product discount and gifting!</li>
</ul>
<p>At Charlotte Tilbury Beauty, our mission is to empower everybody in the world to be the most beautiful version of themselves.</p>
<p>We celebrate and support this by encouraging and hiring people with diverse backgrounds, cultures, voices, beliefs, and perspectives into our growing global workforce.</p>
<p>By doing so, we better serve our communities, customers, employees - and the candidates that take part in our recruitment process.</p>
<p>If you want to learn more about life at Charlotte Tilbury Beauty please follow our LinkedIn page!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, GCP, Vertex AI, LangChain, LangGraph, MLOps, CI/CD, IaC, Experiment tracking, Model registry, Monitoring, Vector databases, Recommender systems, Ranking models, LLM evaluation tools, Feature stores, Data lineage, Observability stacks, E-commerce, Retail environments</Skills>
      <Category>Engineering</Category>
      <Industry>Beauty</Industry>
      <Employername>Charlotte Tilbury Beauty</Employername>
      <Employerlogo>https://logos.yubhub.co/charlottetilbury.com.png</Employerlogo>
      <Employerdescription>A global beauty company founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, with over 2,300 employees globally.</Employerdescription>
      <Employerwebsite>https://www.charlottetilbury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/243770B17B</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>2a56a653-c18</externalid>
      <Title>Palantir Engineer Specialist - Sr. Consultant - Principal</Title>
      <Description><![CDATA[<p><strong>Palantir Engineer Specialist</strong></p>
<p><strong>Sr. Consultant - Principal</strong></p>
<p><strong>London</strong></p>
<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver on our clients&#39; most important challenges? We are growing and are looking for people to join our team. You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organisation allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>
<p><strong>About Your Role</strong></p>
<p>As a <strong>Senior Consultant / Principal Consultant – Palantir Engineer</strong>, you lead and deliver end-to-end, data-driven solutions using <strong>Palantir Foundry</strong> in complex client environments. You operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions. You combine strong hands-on technical skills with a consulting mindset, taking ownership of solution design, implementation, and adoption across organisations.</p>
<p><strong>Your role will include:</strong></p>
<ul>
<li>Own the <strong>end-to-end delivery</strong> of Palantir Foundry–based solutions, from problem definition to production</li>
<li>Design and implement <strong>data pipelines and transformations</strong> across diverse data sources</li>
<li>Model data using <strong>Foundry Ontology</strong> concepts to support analytics and operational use cases</li>
<li>Build scalable, reliable solutions using <strong>Python, SQL, and PySpark</strong> within Foundry</li>
<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>
<li>Support <strong>prototyping, productionisation, and scaling</strong> of data-driven applications</li>
<li>Ensure solutions meet requirements for <strong>data quality, governance, security, and performance</strong></li>
<li>Act as a technical advisor within project teams and contribute to best practices</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>What you bring – required</strong></p>
<p><strong>Experience &amp; Seniority</strong></p>
<ul>
<li>Proven experience as a <strong>Senior Consultant or Principal Consultant</strong> in data, analytics, or platform engineering</li>
<li>Strong experience delivering <strong>client-facing data solutions</strong> in complex environments</li>
<li>Ability to take ownership and work independently in ambiguous problem spaces</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong programming skills in <strong>Python</strong> and <strong>SQL</strong>; <strong>PySpark</strong> experience required</li>
<li>Hands-on experience with <strong>Palantir Foundry</strong>, including:
<ul>
<li>Pipeline Builder / Code Workbook</li>
<li>Data integration and transformation</li>
<li>Ontology modelling and data lineage</li>
</ul>
</li>
<li>Solid understanding of <strong>data architectures</strong>, including data lakes, lakehouses, and data warehouses</li>
<li>Experience working with APIs, databases, and structured / semi-structured data</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience building <strong>scalable ETL/ELT pipelines</strong></li>
<li>Familiarity with <strong>CI/CD concepts</strong>, testing, and production deployments</li>
<li>Strong focus on <strong>solution quality, maintainability, and performance</strong></li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field <strong>or equivalent practical experience</strong></li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with <strong>cloud platforms</strong> (AWS, Azure, GCP)</li>
<li>Familiarity with <strong>containerisation</strong> (Docker, Kubernetes)</li>
<li>Prior experience as a <strong>Palantir FDE</strong> or in Foundry-heavy delivery roles</li>
<li>Domain experience in industries such as <strong>Energy, Finance, Public Sector, Healthcare, or Logistics</strong></li>
</ul>
<p><strong>Benefits</strong></p>
<p><strong>About your team</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will use the most innovative technological solutions in the modern data ecosystem. In this role you’ll see your own ideas transform into breakthrough results across Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognised as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work For. Furthermore, Infosys has been recognised by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities, so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, Palantir Foundry, Pipeline Builder, Code Workbook, Data integration, Data transformation, Ontology modelling, Data lineage, Data architectures, Data lakes, Lakehouses, Data warehouses, APIs, Databases, Structured data, Semi-structured data, ETL/ELT pipelines, CI/CD concepts, Testing, Production deployments, Solution quality, Maintainability, Performance, Bachelor’s degree, Master’s degree, Computer Science, Engineering, Mathematics, Cloud platforms, Containerisation, Palantir FDE, Foundry-heavy delivery roles, Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2A8U1ryerVijb4fFAc6i8u/hybrid-palantir-engineer-specialist---sr.-consultant---principal-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>56dc9a51-e66</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:
<ul>
<li>Data Mesh / Data Fabric / data lake / data warehouse architectures</li>
<li>Modern data architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data platform, DevOps, deployment, and security architectures</li>
<li>Analytics and AI enablement architectures</li>
</ul>
</li>
<li>Hands-on experience with cloud data platforms, e.g.:
<ul>
<li>Azure, AWS, or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
</ul>
</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:
<ul>
<li>Data quality, metadata, lineage, and master data management</li>
<li>Data management software and tools</li>
<li>Security, access control, and compliance considerations</li>
</ul>
</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on Experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>You will be utilizing the most innovative technological solutions in a modern data ecosystem. In this role, you’ll see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top employers list for 2023, and Management Consulting Magazine named us to its list of Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. It is a mid-size player with a supportive, entrepreneurial spirit.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>01be118d-100</externalid>
      <Title>Palantir Engineer Specialist - Sr. Consultant - Principal</Title>
      <Description><![CDATA[<p><strong>About Your Role</strong></p>
<p>As a Senior Consultant / Principal Consultant – Palantir Engineer, you will lead and deliver end-to-end, data-driven solutions using Palantir Foundry in complex client environments. You will operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions.</p>
<p><strong>Your role will include:</strong></p>
<ul>
<li>Own the end-to-end delivery of Palantir Foundry-based solutions, from problem definition to production</li>
<li>Design and implement data pipelines and transformations across diverse data sources</li>
<li>Model data using Foundry Ontology concepts to support analytics and operational use cases</li>
<li>Build scalable, reliable solutions using Python, SQL, and PySpark within Foundry</li>
<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>
<li>Support prototyping, productionisation, and scaling of data-driven applications</li>
<li>Ensure solutions meet requirements for data quality, governance, security, and performance</li>
<li>Act as a technical advisor within project teams and contribute to best practices</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Proven experience as a Senior Consultant or Principal Consultant in data, analytics, or platform engineering</li>
<li>Strong experience delivering client-facing data solutions in complex environments</li>
<li>Ability to take ownership and work independently in ambiguous problem spaces</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong programming skills in Python and SQL; PySpark experience required</li>
<li>Hands-on experience with Palantir Foundry, including:
<ul>
<li>Pipeline Builder / Code Workbook</li>
<li>Data integration and transformation</li>
<li>Ontology modelling and data lineage</li>
</ul>
</li>
<li>Solid understanding of data architectures, including data lakes, lakehouses, and data warehouses</li>
<li>Experience working with APIs, databases, and structured / semi-structured data</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience building scalable ETL/ELT pipelines</li>
<li>Familiarity with CI/CD concepts, testing, and production deployments</li>
<li>Strong focus on solution quality, maintainability, and performance</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with cloud platforms (AWS, Azure, GCP)</li>
<li>Familiarity with containerisation (Docker, Kubernetes)</li>
<li>Prior experience as a Palantir FDE or in Foundry-heavy delivery roles</li>
<li>Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice, you will be utilizing the most innovative technological solutions in a modern data ecosystem. In this role, you’ll see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Infosys Consulting is a globally renowned management consulting firm that is on the front-line of industry disruption. We are a mid-size player with a supportive, entrepreneurial spirit that works with a market-leading brand in every sector, while our parent organization Infosys is a top-5 powerhouse IT brand that is outperforming the market and experiencing rapid growth.</p>
<p>Our consulting business is annually recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and the dedicated training and career paths we offer our consultants. We are committed to fostering an inclusive work culture that inspires everyone to deliver their best.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, PySpark, Palantir Foundry, Pipeline Builder / Code Workbook, Data integration and transformation, Ontology modelling and data lineage, Data architectures, APIs, Databases, Structured / semi-structured data, Cloud platforms, Containerisation, Palantir FDE, Foundry-heavy delivery roles, Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with market-leading brands across sectors. It is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2u6mMfyRc8Yxg8qmvZBSMX/remote-palantir-engineer-specialist---sr.-consultant---principal-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>448a56f3-ab5</externalid>
      <Title>Director of Data Engineering and Agentic AI Automation, Finance</Title>
      <Description><![CDATA[<p><strong>Director of Data Engineering and Agentic AI Automation, Finance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Finance</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>We are looking for a Director of Data Engineering and Agentic AI Automation to lead the next generation of our finance data infrastructure. As OpenAI expands its Finance operations, we need scalable and trustworthy data systems to match the pace and complexity of our growth. This includes well-modeled, auditable data for revenue recognition, financial reporting, and planning, supported by reliable pipelines that connect ERP, planning, and operational systems. You will lead a group of analytics engineers, data engineers, and AI engineers to build the data pipelines that connect our internal engineering systems with enterprise platforms such as Oracle Fusion ERP. This role will also define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making across Finance.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and maintain scalable, auditable data infrastructure that powers accurate financial information, with a focus on revenue recognition, compute attribution, and close automation.</li>
<li>Lead and grow teams of analytics engineers, data engineers, and AI engineers to deliver high-impact, intelligent data systems.</li>
<li>Guide work across financial close and allocations automation, B2C revenue automation from engineering systems to ERP (including reconciliation with cash and source systems), and other mission-critical financial processes.</li>
<li>Design and implement data pipelines connecting ERP, planning, and operational systems, including Oracle Fusion, Anaplan, and Workday.</li>
<li>Build and support scalable, audit-proof architecture that enables reliable financial reporting and compliance.</li>
<li>Develop data and AI-powered workflows that enhance forecasting accuracy, compliance automation, and operational efficiency.</li>
<li>Create and maintain data marts and products that support stakeholders across Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, and Controller teams.</li>
<li>Define and enforce best practices for data modeling, lineage, observability, and reconciliation across finance data domains.</li>
<li>Set the technical direction and manage team structure, mentoring engineers and overseeing contractors or system integrators to ensure delivery of high-quality outcomes.</li>
<li>Partner with senior leaders across Finance, Engineering, and Infrastructure to align on priorities and integrate new automation capabilities.</li>
<li>Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.</li>
<li>Own and maintain Tier-1 data pipelines with strict SLA, data quality, and compliance standards.</li>
<li>Drive the long-term roadmap for agentic AI enablement to build the foundation for “Finance on OpenAI.”</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>12+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets</li>
<li>Proficiency in SQL and Python, with demonstrated experience in schema design, data modeling, and orchestration frameworks</li>
<li>Expertise in distributed data processing technologies such as Apache Spark, Kafka, and cloud-native storage (e.g., S3, ADLS)</li>
<li>Deep knowledge of enterprise data architecture, especially within Finance and Supply Chain</li>
<li>Familiarity with financial processes (close, allocations, revenue recognition) and supply chain data models (supply and demand planning, procurement, vendor master), along with experience ingesting data from internal engineering systems with large volumes of B2C data</li>
<li>Experience integrating with contract manufacturers and external logistics providers is a strong plus</li>
<li>Strong track record of partnering with senior business stakeholders</li>
</ul>
<p><strong>Work Environment</strong></p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$347K – $490K • Offers Equity</Salaryrange>
      <Skills>SQL, Python, Apache Spark, Kafka, cloud-native storage, data modeling, orchestration frameworks, distributed data processing technologies, enterprise data architecture, financial processes, supply chain data models, ETL pipelines, schema design, data engineering, data infrastructure, auditable data, revenue recognition, financial reporting, planning, ERP, operational systems, Oracle Fusion, Anaplan, Workday, data marts, data lineage, observability, reconciliation, finance data domains, predictive analytics, autonomous agent workflows, large-scale automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>347000</Compensationmin>
      <Compensationmax>490000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e84e7b7e-a82e-411e-929a-615dc3080280</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>