<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>8f03ad2d-96f</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a Software Engineer on the Research Data Platform team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>We do not require prior ML or AI training experience. If you enjoy working closely with technical users, learning new domains quickly, and building tools people actually want to use, you&#39;ll pick up the research context fast.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines (e.g., Spark, BigQuery, DuckDB, Parquet); high-volume time series data ingestion, storage, and efficient querying; data cataloging, lineage, or metadata management systems; or ML experiment tracking or metrics platforms.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>large-scale ETL, columnar storage formats, query engines, high-volume time series data, data cataloging, lineage, metadata management systems, ML experiment tracking, Spark, BigQuery, DuckDB, Parquet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a17bc01-d7d</externalid>
      <Title>Staff Software Engineer</Title>
<Description><![CDATA[<p>dbt Labs is seeking a Staff Software Engineer to join our Engineering team. As a seasoned engineer, you will architect and build the durable memory substrate that powers agentic analytics workflows. This platform stores not just metadata, but meaning: decisions, intent, rationale, and history, making it safely accessible to humans, agents, and applications.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Prototyping candidate technical solutions and selecting the best fit for the context engine.</li>
<li>Architecting and building the core Context Platform.</li>
<li>Designing schemas and primitives for Decision Memory and enterprise context.</li>
<li>Owning context storage systems (graph, vector, event/time-based).</li>
<li>Building read/write/query APIs used by agents, products, and external apps.</li>
<li>Designing permission-aware, auditable context access.</li>
</ul>
<p>You will be working closely with agentic systems engineers and product leadership to ensure the context engine is interoperable, portable, and zero-lock-in by design.</p>
<p>In this role, you will own:</p>
<ul>
<li>Context schemas and schema evolution strategies.</li>
<li>Storage and data modeling choices.</li>
<li>Platform APIs and interfaces.</li>
<li>Security, identity propagation, and audit foundations.</li>
<li>Long-term scalability and correctness of context data.</li>
</ul>
<p>You will not own:</p>
<ul>
<li>Agent behavior or orchestration logic.</li>
<li>Business rules or governance policy decisions.</li>
<li>Product UI or workflow automation.</li>
</ul>
<p>The ideal candidate will have significant experience building distributed systems, data platforms, or infrastructure, and will be comfortable operating in ambiguous, greenfield problem spaces. They will also have deep expertise in data modeling and schema design, experience designing shared platforms used by many teams, and strong instincts around APIs, contracts, and backward compatibility.</p>
<p>Nice-to-haves include experience with knowledge graphs, metadata systems, or search/retrieval systems; experience building systems with governance, auditability, or compliance requirements; and familiarity with dbt, modern analytics stacks, or developer tooling.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Data platforms, Infrastructure, Data modeling, Schema design, APIs, Contracts, Backward compatibility, Knowledge graphs, Metadata systems, Search/retrieval systems, dbt, Modern analytics stacks, Developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4661362005</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58a44dab-91a</externalid>
      <Title>Partner Solutions Architect - Japan</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>
<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>
<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>
<p>In practice, this motion consistently includes enablement sessions, QBR sponsorship, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>
<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>
</ul>
<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and influenced pipeline.</p>
<p>What you&#39;ll need:</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Team Interviews</li>
<li>Demo Round</li>
</ul>
<p>#LI-LA1</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005</Applyto>
      <Location>Japan - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22ff82ac-40b</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a software engineer on this team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>You may be a good fit if you have significant software engineering experience, particularly building data-intensive applications or internal tooling. You should enjoy working directly with users, gathering requirements iteratively, and shipping things that get adopted. You should also be results-oriented, with a bias towards flexibility and impact.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines; high-volume time series data; data cataloging, lineage, or metadata management systems; ML experiment tracking or metrics platforms; complex data visualization; and full-stack web application development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>software engineering, data-intensive applications, internal tooling, data pipelines, storage systems, APIs, libraries, web interfaces, dataset management, data cataloging, provenance tooling, research workflows, adjacent teams, large-scale ETL, columnar storage formats, query engines, high-volume time series data, lineage, metadata management systems, ML experiment tracking, metrics platforms, complex data visualization, full-stack web application development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b5b24ef-246</externalid>
      <Title>Engineering Manager II, Programmatic Offsite Ads</Title>
      <Description><![CDATA[<p>About Pinterest</p>
<p>We&#39;re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product. Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other&#39;s unique experiences and embrace the flexibility to do your best work.</p>
<p>Creating a career you love? It&#39;s Possible. At Pinterest, AI isn&#39;t just a feature, it&#39;s a powerful partner that augments our creativity and amplifies our impact, and we’re looking for candidates who are excited to be a part of that.</p>
<p>To get a complete picture of your experience and abilities, we’ll explore your foundational skills and how you collaborate with AI. Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think.</p>
<p>You can read more about our AI interview philosophy and how we use AI in our recruiting process.</p>
<p>Job Summary</p>
<p>We’re seeking a talented Engineering Manager II to take on a leadership role within the Programmatic Offsite Ads team. You will lead critical efforts to define, build, and evolve the ad features that power Pinterest’s ads business through off-platform supply partnerships.</p>
<p>Responsibilities</p>
<p>In this pivotal role, you will take on the challenge of defining and executing the offsite ads strategy for programmatic ads at Pinterest.</p>
<ul>
<li>Own the end-to-end strategy and roadmap for driving programmatic off-platform ads delivery, driving high-quality outcomes that meet advertiser expectations</li>
<li>Partner closely with Product, Design, Research, Sales, Policy, and the broader Monetization org to define new product features and advertising experiences that balance user delight, advertiser outcomes, and platform integrity</li>
<li>Lead experimentation and optimization of advertising campaigns, using A/B testing and rigorous measurement (e.g., viewability, engagement, conversion, advertiser performance, user sentiment) to drive continuous improvement</li>
<li>Work with external supply partners to ensure our off-platform ads are well supported in the programmatic ecosystem, and that Pinterest’s creatives adhere to performance standards</li>
<li>Collaborate with serving, infra, and ML teams to ensure that programmatic ads are backed by robust infrastructure, measurement, and policies</li>
<li>Lead mission-critical initiatives involving 8-10 engineers across backend and frontend stacks, and directly influence their day-to-day work through mentorship, coaching, and clear technical direction</li>
<li>Build and maintain a culture of inclusivity, craft, and operational excellence within the Programmatic Offsite Ads team</li>
<li>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations</li>
<li>Use AI to accelerate analysis, iteration, experimentation, and time to market while applying judgment and verification to ensure correctness and quality</li>
</ul>
<p>Requirements</p>
<ul>
<li>BS (or higher) degree in Computer Science or a related field</li>
<li>2-3+ years of relevant engineering management experience</li>
<li>3-4+ years of relevant industry experience within the ads domain</li>
<li>Experience designing or delivering high-scale, real-time distributed systems</li>
<li>Working knowledge of programmatic advertising and OpenRTB (DSPs/SSPs, auctions, targeting, measurement), and experience partnering with external platforms</li>
<li>Proven track record partnering with Product and Design to define new product features, run experiments, and use data to iterate on performance outcomes</li>
<li>Rich experience working cross-functionally to drive alignment, oversee execution, and secure deliverables across Product, Design, ML, Infra, Sales, and external partners</li>
<li>Experience building storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>
<li>Experience mentoring, guiding, and upleveling engineers, including senior ICs</li>
<li>Strong communication skills and the ability to articulate product strategy and tradeoffs to both technical and non-technical stakeholders</li>
<li>Strong commitment to building inclusive teams and fostering a sense of belonging</li>
</ul>
<p>In-Office Requirement Statement:</p>
<p>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</p>
<p>This role will need to be in the office for in-person collaboration one time per week and therefore needs to be within a commutable distance of our San Francisco office.</p>
<p>Relocation Statement:</p>
<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>
<p>#LI-HYBRID #LI-KBF</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Computer Science, Engineering Management, Programmatic Advertising, OpenRTB, Distributed Systems, Data Lake Storage, Metadata Management, AI, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference. It has millions of active users worldwide.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494773</Applyto>
      <Location>San Francisco, CA, US; Palo Alto, CA, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3168d7d3-70b</externalid>
      <Title>Partner Solutions Architect - North America</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>
<p>Responsibilities</p>
<ul>
<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation (and yes we use it!)</li>
<li>Pension coverage</li>
<li>Excellent healthcare</li>
<li>Paid Parental Leave</li>
<li>Wellness stipend</li>
<li>Home office stipend, and more!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a software company that provides an analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, they have surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005</Applyto>
      <Location>Canada - Remote; US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc0c258f-1f6</externalid>
      <Title>Engineering Manager II, Enterprise AI Solutions</Title>
<Description><![CDATA[<p>We are seeking a business-savvy Engineering Manager to help define Corporate IT&#39;s AI-based future at Pinterest. Working closely with cross-functional engineering teams and business leaders, you will lead a nimble team that plays a pivotal role in scaling Corporate IT&#39;s engineering department.</p>
<p>As an Engineering Manager, you will guide your team in designing and building the solutions that make our business partners&#39; jobs easier, faster, and more capable. You will grow and empower engineers while shaping how we build Pinterest&#39;s AI future.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead a team of employees and contractors focused on solving business problems using AI tools.</li>
<li>Work closely with the existing software engineering teams to develop a seamless and low-friction client experience.</li>
<li>Mentor junior engineers to help them grow and develop into the best that they can be.</li>
<li>Motivate and lead your team to show up every day and do their best work.</li>
<li>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of experience leading and growing engineering teams, with a strong hands-on background in Python.</li>
<li>7+ years of industry experience designing, building, and operating scalable, highly available backend systems, including owning production-grade infrastructure at scale.</li>
<li>Proficiency in designing and delivering AI-based solutions that solve real-world business problems.</li>
<li>Understanding of business-unit challenges, with a focus on Finance, Accounting, Legal, Sales, and Marketing.</li>
<li>Experience with cloud infrastructure on AWS and containerized services using Docker and Kubernetes.</li>
<li>Demonstrated technical leadership and people management experience, including setting team vision and long-term roadmap, mentoring and growing engineers across all levels, driving day-to-day execution and engineering alignment, and partnering cross-functionally to deliver complex, high-impact platform investments.</li>
<li>Demonstrated ability to use AI to accelerate team execution, system design, and decision-making, paired with sound judgment in validating outputs, maintaining quality, and taking ownership of final outcomes.</li>
<li>Experience building storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills.</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs.</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables.</li>
</ul>
<p>In-Office Requirement Statement:</p>
<ul>
<li>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</li>
<li>This role will need to be in the office for in-person collaboration 1-2 times per quarter, and can therefore be based anywhere in the country.</li>
</ul>
<p>Relocation Statement:</p>
<ul>
<li>This position is not eligible for relocation assistance.</li>
</ul>
<p>At Pinterest, we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Python, AI, Cloud infrastructure, Containerized services, Docker, Kubernetes, Data lake storage, Metadata management, Big data, ML/AI innovations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7494960</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a5a8459-f04</externalid>
      <Title>Engineering Manager of Managers, Data Platform</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p><strong>Who we are</strong></p>
<p>Stripe is a financial infrastructure platform for businesses. Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities.</p>
<p><strong>About the team</strong></p>
<p>The Big Data Infrastructure organization is a globally distributed team of approximately 40 engineers spread across Dublin, Bangalore, Seattle, and San Francisco. This team is the backbone of the company’s data ecosystem, responsible for building, scaling, and maintaining the highly reliable platforms that power data storage, orchestration, and processing at scale.</p>
<p>As the Head of Big Data Infra, you will lead a global, ~40-person engineering organization responsible for the foundational data platforms that drive the business. Reporting directly to the Head of Compute, you will define the strategic vision and roadmap for the company&#39;s data lake, orchestration pipelines, and batch computing environments.</p>
<p>The team&#39;s technical portfolio spans four core domains:</p>
<ul>
<li>Datalake (Storage): Managing scalable cloud storage and metadata layers, leveraging Amazon S3, Apache Iceberg (metastore and integrations), SAL, and Hive Metastore (HMS).</li>
<li>Data Orchestration: Ensuring robust pipeline execution and scheduling using Apache Airflow.</li>
<li>Batch Compute Infra (Data Store): Maintaining foundational data infrastructure and legacy systems, including Hadoop.</li>
<li>Batch Compute Experience (Data Processing): Optimizing and delivering powerful data processing environments utilizing Apache Spark and Apache Celeborn.</li>
</ul>
<p><strong>What you’ll do</strong></p>
<p>You will move beyond day-to-day management to act as an industry leader, effectively advocating for your organization&#39;s mission and impact. You will be expected to see problems others don&#39;t and rally people to independently create solutions.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Set Strategic Vision: Define the scope, vision, and goals for your organization with little or no guidance. You will anticipate industry trends to influence Stripe&#39;s long-range plans and set direction on a multi-year timeframe.</li>
<li>Lead at Scale: Manage the achievement of and accountability for broad swaths of programs. You will establish wide-ranging and scaled processes, anticipating and removing roadblocks across multiple teams.</li>
<li>Drive Operational Excellence: Instill a culture of rigorous thinking and meticulous craftsmanship. You will ensure your organization drives constant improvement in team processes and maintains high standards of operational rigor.</li>
<li>Indirect Influence: Use indirect influence to steer other teams toward making the right decisions for Stripe. You will effectively communicate your team&#39;s plan and how it links to Stripe&#39;s company vision to cross-functional stakeholders.</li>
<li>Obsess Over Talent: Proactively invest in the development of the organization and its people at all levels. You will recruit world-class talent and coach your direct reports, who are themselves managers, to elevate the skills of the leadership team.</li>
<li>Stewardship &amp; Culture: Act as an ambassador and advocate for Stripe, modeling ownership for all other Stripes. You will actively work to increase Stripe&#39;s inclusivity and diversity and use our operating principles to guide decision-making.</li>
</ul>
<p><strong>Who you are</strong></p>
<p>We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.</p>
<p><strong>Minimum requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree or equivalent practical experience, with a minimum of 5 years of software development experience.</li>
<li>Minimum 5 years of experience in a technical leadership role overseeing strategic projects.</li>
<li>Minimum 3 years of Manager of Managers experience (managing other engineering managers).</li>
<li>Experience building diverse teams to tackle challenging technical problems.</li>
<li>Ability to thrive in a collaborative environment involving different stakeholders and subject matter experts.</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>Strategic Ambiguity: Proven ability to translate chaos into clarity and navigate complex, high-impact work where you must define your own scope.</li>
<li>Infrastructure at Scale: Successfully shipped and operated critical infrastructure with significant responsibility over funds or critical data.</li>
<li>Cross-Functional Influence: A track record of getting other teams on board with your vision to support execution in a way that benefits the broader company.</li>
<li>Curiosity: You enjoy learning and diving into the nuts-and-bolts of how things work (e.g., global money movement rails, currency conversion, or inter-company flows).</li>
<li>Humility and Adaptability: You are humble and self-aware, with a history of adapting your management approach across different environments and seeking feedback to grow as a leader.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Strategic vision, Technical leadership, Project management, Team management, Communication, Problem-solving, Infrastructure at scale, Cross-functional influence, Curiosity, Humility and adaptability, Apache Iceberg, Apache Airflow, Apache Spark, Apache Celeborn, Amazon S3, Hive Metastore, SAL, Cloud storage, Metadata layers, Data orchestration, Batch computing infrastructure, Legacy systems, Hadoop, Global money movement rails, Currency conversion, Inter-company flows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7747391</Applyto>
      <Location>Seattle, San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>456f029f-2e2</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing ZoomInfo&#39;s next-generation unified data platform.</p>
<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>
<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>
<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation.</li>
<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>
<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>
<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>
<li>Implement CDC pipelines and calculated field processing for derived data views.</li>
<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>
<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>
<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>
<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>
<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>
<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>
<li>10+ years of software engineering experience building large-scale data platforms.</li>
<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>
<li>Strong experience with Java 8+, Scala, Kotlin, GoLang for data systems development.</li>
<li>Proven experience with GCP or AWS and cloud-native architectures.</li>
<li>Experience with streaming/real-time data processing technologies.</li>
<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>
<li>Hands-on experience with Google Cloud Platform services.</li>
<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>
<li>Experience solving data freshness and consistency challenges in distributed systems.</li>
<li>Background in building observability and monitoring solutions for data platforms.</li>
<li>Familiarity with metadata management and schema evolution.</li>
<li>Experience with Kubernetes for deploying data services.</li>
<li>SQL query optimization and performance tuning expertise.</li>
<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>
<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>
<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>
<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>
<li>Passion for building reliable, observable, and maintainable systems.</li>
<li>Experience promoting diverse, inclusive work environments.</li>
</ul>
<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>
<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. Learn more about ZoomInfo benefits here.</p>
<p>Below is the US base salary for this position. Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>
<p>$163,800-$257,400 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema generation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8243004002</Applyto>
      <Location>Remote-US-CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
<Employerdescription>Udio is an AI-powered music creation platform.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>62efca6f-b6f</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p><strong>LLM &amp; AI Pipeline Engineering</strong></p>
<ul>
<li>Design, build, and maintain production-grade LLM integration pipelines, including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</li>
<li>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</li>
<li>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</li>
<li>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</li>
<li>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</li>
</ul>
<p><strong>Retrieval &amp; Vector Search</strong></p>
<ul>
<li>Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</li>
<li>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data, processing invoices, receipts, policy documents, and transaction records.</li>
<li>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</li>
</ul>
<p><strong>ML Model Serving &amp; Operations</strong></p>
<ul>
<li>Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</li>
<li>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</li>
<li>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</li>
<li>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</li>
</ul>
<p><strong>Backend Integration &amp; Reliability</strong></p>
<ul>
<li>Integrate AI services cleanly with Jeeves&#39;s backend microservices, designing clear API contracts, circuit breakers, and graceful degradation patterns.</li>
<li>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</li>
<li>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</li>
</ul>
<p><strong>Collaboration &amp; Growth</strong></p>
<ul>
<li>Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</li>
<li>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</li>
<li>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM, AI, Python, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, Pinecone, Weaviate, pgvector, semantic search, RAG-based features, document ingestion, chunking pipelines, embedding model selection, chunk strategy, metadata filtering, re-ranking techniques, model serving infrastructure, latency SLOs, input validation, output monitoring, model performance monitoring, data drift detection, clean data pipelines, feature engineering, API contracts, circuit breakers, graceful degradation patterns, structured logging, distributed tracing, latency dashboards, alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/ded9e04e-f18e-4d4c-ae43-4b7882c6200b</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems in partnership with the team that automatically apply policy tags, RLS, and access controls through an active control plane</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while the platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience - you build systems that let others move fast safely</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Atlan, Collibra, DataHub, Great Expectations, Monte Carlo, Soda, Terraform, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1c431665-20b</externalid>
      <Title>Data Governance and Management Lead</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. We are seeking a Data Governance &amp; Management Lead within the Global Analytics team to help develop and implement data controls, data quality standards, and governance practices across the platform.</p>
<p>This role supports data integrity, metadata, and access controls to help ensure data is accurate, consistent, and fit for purpose. This is a hands-on role that requires strong technical fluency, structured problem-solving, and the ability to translate governance requirements into practical implementations within data systems.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Working knowledge of data governance, data management, and data quality frameworks</li>
<li>Experience supporting the implementation of data controls within data pipelines and reporting systems</li>
<li>Advanced proficiency in SQL, Python, or other data query and analysis tools</li>
<li>Proficiency with business intelligence and data visualization tools such as Looker, Power BI, or Tableau</li>
<li>Experience with database design, including understanding complex data schemas and data extraction</li>
<li>Familiarity with data lineage, metadata management, and data modeling concepts</li>
<li>Ability to define and implement data quality rules and validation checks</li>
<li>Understanding of data access principles, including role-based access and data classification</li>
<li>Ability to document data processes and controls clearly and in a structured way</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Oversee the data governance program, identify improvement areas, and implement best practices to enhance data quality, integrity, and security</li>
<li>Develop and implement data quality standards and monitoring processes, including establishing data quality metrics and thresholds</li>
<li>Assist in managing the data issue lifecycle, including tracking and supporting remediation efforts</li>
<li>Manage the data governance platform (Atlan) and serve as the primary subject matter expert</li>
<li>Assist in data classification efforts, including identifying and categorizing sensitive data and critical data elements</li>
<li>Manage external data requests, including regulatory inquiries, ensuring compliance with banking regulations</li>
<li>Monitor and report on key data governance metrics and KPIs, providing insights and recommendations to senior management</li>
<li>Lead data governance meetings and workshops, facilitating discussions and decision-making to drive the data governance program forward</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Have a deep understanding of Anchorage Digital’s strategy and business lines.</li>
<li>Understand how data supports decision-making and operational processes across the organization</li>
<li>Possess strategic thinking and vision, with the ability to develop and implement a comprehensive data governance strategy aligned with organizational goals and objectives</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Able to communicate complex issues clearly and credibly to a wide range of audiences.</li>
<li>Document data processes, controls, and findings clearly for internal stakeholders</li>
<li>Build effective relationships and rapport with stakeholders, including cross-functional and external partners</li>
<li>Communicate, organize, and execute cross-team goals and projects, leveraging relationships and resources to solve problems</li>
<li>Collaborate with Data Platform, InfoSec, Product, and Engineering partners</li>
</ul>
<p><strong>You may be a fit for this role if you have:</strong></p>
<ul>
<li>Bachelor’s degree required. Advanced degrees or certifications in data analytics or governance preferred</li>
<li>4–7 years of experience in data governance, data management, data quality, or data analytics</li>
<li>Hands-on experience implementing or supporting data quality and governance practices</li>
<li>Experience managing data classification, access controls, and external data requests</li>
<li>Experience working with data pipelines, reporting systems, or analytical datasets</li>
<li>Experience writing, editing, or reviewing technical documentation for regulatory or banking contexts</li>
<li>Strong attention to detail, with a focus on accuracy, completeness, and consistency in data governance processes and controls</li>
<li>Ability to work independently on defined tasks and contribute to team objectives</li>
<li>Strong problem-solving skills and comfort working in structured, detail-oriented environments</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You&#39;ve kept up to date with the proliferation of blockchain and crypto innovations.</li>
<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system. :)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data governance, data management, data quality frameworks, SQL, Python, Looker, Power BI, Tableau, database design, data lineage, metadata management, data modeling, data access principles, role-based access, data classification</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/5bfbd64c-933e-418c-9c07-5aea50212c0d</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>251e58d5-61e</externalid>
      <Title>FBS Salesforce Agile dev team member IV</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>This role is responsible for designing and delivering architecture (integration, process, application, data, and technology) in alignment with the enterprise architecture vision, strategy, and roadmap. It requires in-depth conceptual and practical knowledge of system architecture and basic knowledge of related disciplines, along with an understanding of best practices, how your area integrates with others, and the competitive factors that differentiate the organization in the market.</p>
<p><strong>Essential Job Functions:</strong></p>
<ul>
<li>10+ years of experience with Salesforce with 5+ years as a Salesforce Architect</li>
<li>End to End implementation experience with Salesforce platform. Collaborate and align with platform architects, and product owners to translate business requirements into solutions on Salesforce.</li>
<li>Strong hands-on development experience using out-of-the-box features and custom development with Apex, Visualforce, and other Force.com programming languages, including high proficiency in Lightning Web Components (HTML, CSS, JavaScript)</li>
<li>Integration experience using web-based technologies (SOAP, REST) and middleware tools such as MuleSoft</li>
<li>Experience with release management, source control, and deployment concepts and technologies such as ANT, the SFDC Metadata API, and Jenkins</li>
<li>Hands-on experience with Agile frameworks such as SAFe, and with DevOps</li>
<li>Develop and maintain Salesforce technical roadmaps and strategies</li>
<li>Develop and enforce Salesforce governance and security policies.</li>
<li>Conduct Salesforce performance optimization and tuning.</li>
<li>Strong experience in Sales and Service Cloud, including Chat</li>
<li>Experience with other Salesforce products such as Financial Services Cloud, Experience Cloud, and Marketing Cloud would be beneficial</li>
<li>Salesforce Admin, Developer and Architect Certifications</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce, Apex, Visualforce, Lightning Web Components, Mulesoft, ANT, SFDC Metadata API, Jenkins, SAFe, DevOps, Salesforce Admin, Salesforce Developer, Salesforce Architect</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in consulting, technology services, and digital transformation, helping organizations harness the value of technology across their business.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/98oFAaHeFVmWECrwG8buNi/hybrid-fbs-salesforce-agile-dev-team-member-iv-in-chennai-at-capgemini</Applyto>
      <Location>Chennai, Tamil Nadu, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ee2fcbdc-fc4</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will act as a senior technical leader in complex data and analytics engagements, shaping and governing end-to-end enterprise data architectures, leading technical teams, and serving as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will be responsible for ensuring that enterprise data and analytics solutions are scalable, secure, and production-ready, while translating business requirements into robust technical designs and delivery roadmaps.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:</li>
<li>Data mesh, data fabric, data lake, and data warehouse architectures</li>
<li>Modern Data Architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data Platform, DevOps, deployment and security architectures</li>
<li>Analytics and AI enablement architectures</li>
<li>Hands-on experience with cloud data platforms, e.g.:</li>
<li>Azure, AWS or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:</li>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data Management software and tools</li>
<li>Security, access control, and compliance considerations</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on Experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will work with the most innovative technological solutions in the modern data ecosystem, and you’ll see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top-employers list for 2023, and Management Consulting Magazine named us one of its Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities, so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data mesh, data fabric, data lake and data warehouse architectures, batch and streaming data integration patterns, DevOps, deployment and security architectures, analytics and AI enablement, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Docker, Kubernetes, AI/ML, GenAI, Kafka, Azure Event Hubs, data governance, metadata tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/uuSzzCt8qNbo6UpEFkSyjY/hybrid-principal-consultant---data-architecture-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>56dc9a51-e66</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:</li>
<li>Data mesh, data fabric, data lake, and data warehouse architectures</li>
<li>Modern Data Architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data Platform, DevOps, deployment and security architectures</li>
<li>Analytics and AI enablement architectures</li>
<li>Hands-on experience with cloud data platforms, e.g.:</li>
<li>Azure, AWS or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:</li>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data Management software and tools</li>
<li>Security, access control, and compliance considerations</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on Experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>You will work with the most innovative technological solutions in the modern data ecosystem. In this role you’ll see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top-employers list for 2023, and Management Consulting Magazine named us one of its Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities, so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data architecture, data integration, data engineering, analytics, data mesh, data fabric, data lake and data warehouse architectures, batch and streaming data integration patterns, DevOps, deployment and security architectures, analytics and AI enablement, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration, data pipelines, orchestration, automation, CI/CD, distributed systems, Docker, Kubernetes, data quality, metadata, lineage, master data management, security, access control, compliance, AI/ML, GenAI, Kafka, Azure Event Hubs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. It is a mid-size player with a supportive, entrepreneurial spirit.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>e93c43d7-d2a</externalid>
      <Title>Senior Growth Marketing Manager, Mobile &amp; Conversions</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>We&#39;re looking for a data-driven Sr. Growth Marketing Manager to own and optimize user acquisition and conversion across Replit&#39;s mobile app and website properties. You&#39;ll be responsible for driving top-of-funnel growth through App Store Optimization (ASO) and website conversion rate optimization (CRO) that turns visitors into engaged creators.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Own mobile app growth and App Store Optimization (ASO)</strong></p>
<ul>
<li>Lead ASO strategy to increase visibility, discoverability, and conversion in the App Store and Google Play</li>
<li>Optimize app store presence including metadata, keywords, screenshots, preview videos, ratings/reviews, and feature placements</li>
<li>Analyze performance data to continuously refine messaging, creative, and store listing elements</li>
</ul>
<p><strong>Drive website conversion rate optimization (CRO)</strong></p>
<ul>
<li>Own conversion optimization across Replit.com, including landing pages, product pages, and signup flows</li>
<li>Establish rigorous A/B testing programs for web experiences to improve visitor-to-signup and signup-to-activation rates</li>
<li>Collaborate with Design and Content teams to develop and test high-performing landing page variations</li>
</ul>
<p><strong>Build testing infrastructure and experimentation culture</strong></p>
<ul>
<li>Partner with Product and Engineering to implement testing frameworks and tools for both mobile and web</li>
<li>Develop hypotheses based on data analysis, user research, and competitive intelligence</li>
<li>Document learnings and share insights across the organization to inform broader product and marketing strategy</li>
</ul>
<p><strong>Analyze, measure, and report on performance</strong></p>
<ul>
<li>Build dashboards and reporting frameworks to measure mobile and web conversion performance</li>
<li>Identify friction points and conversion drop-offs across mobile app install flows and website signup funnels</li>
<li>Monitor competitive landscape and industry benchmarks to identify opportunities</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>10+ years of growth or performance marketing experience with significant focus on mobile app growth and website conversion optimization</li>
<li>Deep expertise in ASO and website CRO, including A/B testing, metadata optimization, landing page optimization, and improving conversion rates across the funnel</li>
<li>Hands-on experience with mobile measurement platforms and analytics tools</li>
<li>Strong analytical skills with proven ability to establish testing cultures – you design experiments, interpret data, and run rigorous programs that deliver measurable results</li>
<li>Experience partnering with Product and Engineering to implement tracking and optimize user experiences</li>
<li>Creative problem-solver with bias for experimentation – you iterate constantly and aren&#39;t afraid to fail fast</li>
<li>Experience marketing to non-technical audiences (creators, PMs, designers, business professionals) and optimizing for both B2C and B2B motions</li>
<li>Background in SaaS products, developer tools, AI-powered testing tools, or high-growth tech companies is a plus</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p><strong>Want to Learn More?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
<li>Replit: Make an app for that</li>
<li>Replit Blog</li>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
<p><strong>Compensation Range</strong></p>
<p>$165K - $215K</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165K - $215K</Salaryrange>
      <Skills>App Store Optimization (ASO), Website Conversion Rate Optimization (CRO), A/B Testing, Metadata Optimization, Landing Page Optimization, Mobile Measurement Platforms, Analytics Tools, Data Analysis, User Research, Competitive Intelligence, Testing Frameworks, Experimentation Culture, SaaS Products, Developer Tools, AI-Powered Testing Tools, High-Growth Tech Companies</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is an agentic software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/8d9be172-0df0-44ad-a2a5-f72fac887ca0</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>d3f7efba-55c</externalid>
      <Title>Simulation Environments Engineer</Title>
      <Description><![CDATA[<p><strong>Simulation Environments Engineer</strong></p>
<p><strong>About the Team</strong></p>
<p>Our Robotics team is focused on unlocking general-purpose robotics and pushing towards AGI-level intelligence in dynamic, real-world settings. Working across the entire model stack, we integrate cutting-edge hardware and software to explore a broad range of robotic form factors. We strive to seamlessly blend high-level AI capabilities with the constraints of physical systems to improve people’s lives.</p>
<p><strong>About the Role</strong></p>
<p>We are hiring a <strong>Simulation Environments Engineer</strong> to build the tooling and infrastructure that enable high-coverage, realistic virtual environments for robotics research and evaluation. This role is focused on <em>creating the systems</em> (not necessarily hand-crafting every asset) that let researchers and engineers describe, visualize, generate, and validate task environments at scale. You will design pipelines for importing and vetting third-party content, author procedural and randomized scenario generators, and ship ergonomic tools that make environment creation fast, repeatable, and testable. This role sits at the intersection of game-engine practice, asset engineering, and large-scale simulation infrastructure.</p>
<p><strong>This role is based in San Francisco, CA, and requires in-person work 3 days a week.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build interactive and programmatic tooling to describe, preview, and validate scenes and tasks so researchers can author scenarios quickly and repeatedly.</li>
<li>Create content pipelines to curate, convert, optimize and quality-check assets (visual + collision) from third-party collections and internal sources; define standards so assets behave predictably across engines and tasks.</li>
<li>Implement robust importers and adapters that bring environments and setups from Isaac/Unity/Unreal/Omniverse/other repos into our sim pipelines while preserving fidelity and ensuring performance.</li>
<li>Build frameworks for procedural generation and controlled randomization (visual, physical, kinematic) so models see a systematic, measurable variety of conditions.</li>
<li>Define and enforce quality gates for environments (visual fidelity, collision correctness, physical plausibility) and instrument validation tooling so environments meet realism/coverage goals.</li>
<li>Connect environment tooling to CI/CD, presubmit checks, large-scale simulation farms and model-eval pipelines so environments can be tested automatically and run at scale. (You’ll partner with sim-pipelines and sim-realism owners.)</li>
<li>Create processes and templates to onboard new object libraries and contracted asset work; provide clear acceptance tests and automation for vendor deliverables.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Enjoy building ergonomic tooling that empowers other engineers and researchers to produce high-quality environments quickly.</li>
<li>Have practical experience with modern world engines (NVIDIA Isaac Sim, Unity, Unreal Engine, Omniverse) or equivalent production pipelines and can choose and integrate the right platform for each use case.</li>
<li>Are comfortable with the full content pipeline: CAD/asset import, USD/GLTF/FBX/texture workflows, collision mesh generation, LODs, and material/physics metadata.</li>
<li>Have built or used procedural generation and domain randomization systems to produce broad, task-relevant variability.</li>
<li>Care about quality control and validation — you like to design automated checks and visual/quantitative diagnostics that ensure environments are correct and performant.</li>
<li>Can collaborate across functions — you’ll work closely with researchers, physics/realism engineers, SWE/RE, and vendors to ensure environments are both realistic and actionable for ML training and evaluation.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>modern world engines, NVIDIA Isaac Sim, Unity, Unreal Engine, Omniverse, procedural generation, domain randomization, CAD/asset import, USD/GLTF/FBX/texture workflows, collision mesh generation, LODs, material/physics metadata, game-engine practice, asset engineering, large-scale simulation infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a private company with a large team of researchers and engineers.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/39cd0dd8-520d-4932-80bf-7495a1d1d11b</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>582f2e18-5a0</externalid>
      <Title>Procurement Operations Leader</Title>
      <Description><![CDATA[<p><strong>Procurement Operations Leader</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Finance</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$234K – $295K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Procurement Operations team runs the engine that turns intent into spend at OpenAI.</p>
<p>We sit between the people who need things (research, product, engineering, operations) and the systems that record, pay for, and report on them. Our job is to make it easy for teams to buy what they need while ensuring every dollar lands in the right place — on the right supplier, on the right PO, with the right data — so Finance, Legal, and Security can trust the outcome.</p>
<p>As OpenAI scales, this team is also where AI, automation, and self-service come to life. We design workflows that let more volume move touchlessly, while embedding controls and audit trails directly into the systems — so speed and rigor grow together. This team works side-by-side with Accounts Payable, Strategic Sourcing, Legal, Security, and Finance Systems to ensure the company can move fast without losing financial integrity or control.</p>
<p><strong>About the Role</strong></p>
<p>The Procurement Operations Leader owns the operational backbone of how OpenAI turns requests into committed, controlled, and payable spend. This role sits at the center of our purchasing lifecycle — ensuring that every request, PO, and supplier record is complete, policy-aligned, and ready to scale through automation, BPO support, and AI-assisted workflows already in production.</p>
<p>You’ll be responsible for the health of the intake-to-PO engine: how work enters the system, how it gets validated, how it routes, and how it lands in our financial systems. That includes strengthening the controls that keep us safe, the data that makes us fast, and the workflows that allow AI and self-service to do more of the work.</p>
<p>This is a player-coach role. You’ll design the systems and guardrails that enable scale, and you’ll also step into the queue when something breaks, when volume spikes, or when a complex case needs hands-on leadership.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the intake-to-PO operating model, including request validation, routing, PO creation, change management, and supplier master data.</li>
<li>Ensure every request and PO is complete, policy-aligned, and audit-ready, with clean metadata that supports invoice matching and reporting.</li>
<li>Increase first-pass invoice match rates by improving upstream data quality and exception logic.</li>
<li>Reduce P2P cycle time through automation, self-service, and disciplined queue management.</li>
<li>Codify approval rules, exception handling, and segregation of duties directly into Zip, Oracle, and connected systems.</li>
<li>Expand and refine AI-assisted triage, validation, and routing so more volume moves through the system with fewer manual touchpoints.</li>
<li>Build and maintain dashboards and operational metrics to track intake health, cycle time, exception rates, and control performance.</li>
<li>Partner with Legal, Security, TPRM, and Finance Systems to ensure controls and policies are reflected in how work actually flows.</li>
<li>Serve as the escalation point when operational breakdowns occur — and lead the resolution through to a clean outcome.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of experience running procurement, P2P, or intake-driven operations in fast-scaling environments.</li>
<li>A strong understanding of how upstream intake and metadata quality drive downstream invoice accuracy, audit readiness, and financial clarity.</li>
<li>A passion for transforming messy, manual processes into structured, scalable, automated systems.</li>
<li>Experience leveraging AI/automation to improve quality, speed, and scale.</li>
<li>A track record of using data and metrics to spot risk, bottlenecks, and opportunities to simplify.</li>
<li>A strong grasp of controls, segregation of duties, and compliance requirements across Procurement, Legal, and Finance.</li>
<li>Comfort being a technical leader and a player-coach.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and bonus structure</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and experienced professional looking for a new challenge, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$234K – $295K</Salaryrange>
      <Skills>Procurement, P2P, Intake-driven operations, AI, Automation, Self-service, Workflow design, Controls, Audit trails, Data quality, Metadata, Invoice matching, Reporting, Approval rules, Exception handling, Segregation of duties, Compliance, Risk management, Bottleneck identification, Opportunity analysis, Technical leadership, Player-coach, Cloud-based systems, Data analytics, Business process improvement, Change management, Communication, Collaboration, Leadership, Coaching, Mentoring</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and applying artificial intelligence to help humans learn, work, and create. It is a fast-scaling organisation with a significant presence in the tech industry.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/54aa42d3-9823-4b88-8807-d7b87a08858c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e9e336c5-ad3</externalid>
      <Title>Software Engineer, Privacy Infrastructure</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Privacy Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Security</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Privacy Engineering team sits at the intersection of Security, Privacy, Legal, and Core Infrastructure. Our mission is to build data infrastructure and systems to support our privacy, legal, and security teams—securely, quickly, and at scale. Our guiding principles include: defensibility by default, enabling researchers, preparing for future transformative technologies, and building a robust security culture.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a Software Engineer who can design and operate technical systems that support legal compliance workflows, including secure data processing and document review. You’ll partner daily with Legal, Security, IT, and partner engineering teams to turn legal processes into concrete technical workflows. This role is ideal for an engineer who loves large-scale data problems and understands the rigor required when the results may be scrutinized.</p>
<p>This position is located in San Francisco. Relocation assistance is available.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and operate data storage pipelines that run at scale.</li>
<li>Build search &amp; discovery services (e.g., Spark/Databricks, index layers, metadata catalogs) based on the needs of partner teams.</li>
<li>Automate secure data transfers—encrypting, checksumming, and auditing exports to reviewers.</li>
<li>Stand up locked-down compute environments that balance usability with security controls.</li>
<li>Instrument monitoring and KPIs that maintain accountability of data holds and productions.</li>
<li>Collaborate cross-functionally to codify SOPs, threat models, and chain-of-custody documentation that withstand scrutiny.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have hands-on experience building or operating large-scale data-lake or backup systems (Azure, AWS, GCP).</li>
<li>Know your way around Terraform or Pulumi, CI/CD, and can turn ad-hoc legal requests into repeatable pipelines.</li>
<li>Are comfortable working with discovery workflows (legal holds, enterprise document collections, secure review), or eager to build expertise quickly.</li>
<li>Can communicate technical concepts — from storage governance to block-ID APIs — clearly to teams such as Legal, Engineering, and others.</li>
<li>Have shipped secure solutions that balance speed, cost, and evidentiary defensibility—and can articulate the trade-offs.</li>
<li>Communicate crisply, document rigorously, and enjoy working across disciplines under tight deadlines.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $325K • Offers Equity</Salaryrange>
      <Skills>Terraform, Pulumi, CI/CD, Spark/Databricks, index layers, metadata catalogs, Azure, AWS, GCP, large-scale data-lake or backup systems, secure data transfers, compute environments, monitoring and KPIs, SOPs, threat models, chain-of-custody documentation, hands-on experience building or operating large-scale data-lake or backup systems, comfortable working with discovery workflows, able to communicate technical concepts clearly to teams such as Legal, Engineering, and others, have shipped secure solutions that balance speed, cost, and evidentiary defensibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a privately held company.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/07153f7c-7e8b-4283-a879-cb07a224e083</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ec06a431-7fa</externalid>
      <Title>Software Engineer - Privacy &amp; Compliance</Title>
      <Description><![CDATA[<p><strong>Software Engineer - Privacy &amp; Compliance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco; Seattle</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p>We’re looking for a <strong>Software Engineer</strong> to architect and build backend systems that enforce data privacy and automate compliance at scale. You’ll work closely with product, infrastructure, security, and legal teams to embed privacy-by-design into our data and access layers.</p>
<p>This is a hands-on, high-impact role for an experienced engineer who is passionate about protecting user data while enabling innovation.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Design, build, and operate backend services that enforce policy-driven data access, lifecycle controls, and privacy protections.</li>
<li>Develop distributed authorization and identity-aware enforcement mechanisms integrated directly into data services and control planes.</li>
<li>Implement auditability, policy hooks, and enforcement observability to ensure compliance is continuously verifiable.</li>
<li>Partner with Security, Legal, and Compliance to convert privacy requirements into scalable technical designs and developer-friendly APIs.</li>
<li>Harden data platforms and backend services through schema-level controls and data handling constraints by default.</li>
<li>Collaborate with infrastructure teams to ensure consistent enforcement across systems while minimizing duplicated implementations.</li>
<li>Contribute patterns, libraries, and education that elevate trustworthy data access patterns across the organization.</li>
</ul>
<p><strong>You Might Thrive in This Role If You Have</strong></p>
<ul>
<li><strong>5+ years of industry experience</strong> building and operating backend or infrastructure systems in production.</li>
<li><strong>Strong software engineering fundamentals</strong>, with fluency in at least one major programming language (e.g., Python, Go, Rust, C++, Java).</li>
<li>Experience with distributed authorization, RBAC/ACL systems, encryption-based access, or policy engines.</li>
<li><strong>Familiarity with global privacy regulations</strong> and their architectural implications.</li>
<li><strong>Ability to influence and collaborate</strong> with teams across legal, compliance, product, and engineering.</li>
<li>A <strong>bias toward practical, impactful solutions</strong> that balance privacy protections with product needs.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with cloud platforms (e.g., Azure, AWS, GCP) and large-scale data systems.</li>
<li>Background in security engineering, privacy engineering, or data governance.</li>
<li>Experience with control-plane or metadata-driven enforcement systems.</li>
<li>Exposure to data platforms or ML infrastructure.</li>
<li>Prior experience in a regulated or highly sensitive data environment.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>Python, Go, Rust, C++, Java, Distributed authorization, RBAC/ACL systems, Encryption-based access, Policy engines, Global privacy regulations, Cloud platforms, Large-scale data systems, Security engineering, Privacy engineering, Data governance, Control-plane or metadata-driven enforcement systems, Data platforms, ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/23b158fe-709e-4bf5-856c-d10953d32f60</Applyto>
      <Location>San Francisco, Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9278e637-313</externalid>
      <Title>Software Engineer, Core Services</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Software Engineer, Core Services</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Core Services team is responsible for building and managing foundational services. It acts as the bridge between core infrastructure (e.g. compute, storage, networking) and product engineering teams, and enables product teams to move fast, build reliably, and scale efficiently.</p>
<p><strong>About the Role</strong></p>
<p>As a Software Engineer on the Core Services team, you will design and operate critical backend platforms such as caching systems, workflow orchestration, metadata stores, and file services. You’ll focus on building highly reliable, scalable, and performant systems that serve as the backbone of our products.</p>
<p>We’re looking for people who are passionate about building infrastructure that empowers product teams, love working on distributed systems challenges, and enjoy creating well-designed APIs and abstractions that accelerate development.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and maintain shared infrastructure services such as caching layers, workflow orchestration (Temporal), metadata stores, and file storage services.</li>
<li>Collaborate with product teams to provide scalable, reliable primitives that abstract the complexities of distributed systems.</li>
<li>Improve performance, resilience, and scalability of core services that power customer-facing applications.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience with distributed systems, caching infrastructure (e.g., Redis, Memcached), metadata storage (e.g., FoundationDB), or workflow orchestration (e.g., Temporal, Cadence).</li>
<li>Have experience running containerized services in cloud environments and integrating them into automated build/test/release (CI/CD) workflows.</li>
<li>Understand trade-offs in consistency models, replication strategies, and performance optimization in multi-region systems.</li>
<li>Excel at communication and collaboration with cross-functional teams, and are obsessed with delivering customer success.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>distributed systems, caching infrastructure, metadata storage, workflow orchestration, containerized services, cloud environments, automated build/test/release (CI/CD) workflows, consistency models, replication strategies, performance optimization, communication and collaboration, cross-functional teams, customer success</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/21bfde35-ffec-42d2-a2c6-8a03dad789d5</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>440a65d7-eed</externalid>
      <Title>Software Engineer - Sensing, Consumer Products</Title>
      <Description><![CDATA[<p><strong>Software Engineer - Sensing, Consumer Products</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Consumer Products Research prototypes the future of computing: we explore new modalities, interaction patterns, and system behaviors, then do the engineering required to make those ideas real in rigorous prototypes. The Neosensing team sits at the intersection of sensing, edge algorithms, and systems engineering. We build the end-to-end software that turns new signals into dependable capabilities—collection tooling and protocols, algorithm integration and evaluation hooks, and on-device loops that stay stable under real-world variability. We care deeply about software quality and iteration speed: clean interfaces, debuggability, observability, and performance under tight device constraints.</p>
<p><strong>About the Role</strong></p>
<p>As a Software Engineer on Consumer Products Research, you’ll sit at the boundary between algorithm development and shippable systems. You’ll work closely with algorithm engineers to translate prototypes into clean interfaces, reliable pipelines, and efficient on-device implementations—with strong attention to performance, observability, and real-world failure modes.</p>
<p>This is a software role first: we’re looking for someone who loves writing great code every day, takes pride in engineering craft, and is comfortable going deep enough into the algorithmic details to make the system work end-to-end.</p>
<p><strong>This role is based in San Francisco, CA. We use a hybrid work model of four days in the office per week and offer relocation assistance to new employees.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and ship production software for sensing algorithms, translating algorithm prototypes into reliable end-to-end systems.</li>
<li>Implement and own key parts of the Python shipping pipeline (integration surfaces, evaluation hooks, and quality/performance guardrails).</li>
<li>Develop embedded/on-device software in an RTOS environment (e.g., Zephyr) and deploy models to device runtimes and hardware accelerators.</li>
<li>Optimize real-time on-device perception loops (e.g., detection/tracking-style pipelines) for stability, latency, power, and memory constraints.</li>
<li>Create data collection + instrumentation tooling to bring up new sensing modalities and accelerate iteration from prototype → dataset → model → device.</li>
<li>Partner cross-functionally (algorithms, human data, firmware/hardware) to debug, profile, and harden systems against real-world variability.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Love writing great software and want your work to sit close to novel sensing and edge algorithms.</li>
<li>Understand algorithm behavior well enough to integrate, debug, and evaluate it—even if you’re not the primary model inventor.</li>
<li>Have shipped production Python systems and care about clean interfaces, tests, and long-term maintainability.</li>
<li>Enjoy embedded/on-device work and can debug across hardware, firmware, and higher-level application layers.</li>
<li>Care about performance engineering and know how to profile and optimize under tight device constraints.</li>
<li>Take ownership end-to-end and thrive in ambiguous, fast-moving, zero-to-one environments.</li>
</ul>
<p><strong>Bonus:</strong></p>
<ul>
<li>Zephyr (or similar RTOS) experience.</li>
<li>On-device ML deployment (NPU/GPU/DSP) and accelerator-aware profiling/optimization.</li>
<li>Background in multimodal sensing, sensor fusion, or on-device perception.</li>
<li>Experience building data collection systems and human-in-the-loop workflows (protocols, QA, metadata).</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences of our users and the broader community.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$325K • Offers Equity</Salaryrange>
      <Skills>Python, Zephyr, RTOS, Embedded/on-device software development, Data collection and instrumentation tooling, Algorithm integration and evaluation, Clean interfaces and long-term maintainability, Performance engineering and profiling/optimization, Zephyr (or similar RTOS) experience, On-device ML deployment (NPU/GPU/DSP) and accelerator-aware profiling/optimization, Background in multimodal sensing, sensor fusion, or on-device perception, Experience building data collection systems and human-in-the-loop workflows (protocols, QA, metadata)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. They push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through their products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f6dfb6c0-44af-4512-af8c-967b8bb12867</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>95c83623-137</externalid>
      <Title>Salesforce Developer</Title>
      <Description><![CDATA[<p><strong>What you'll do</strong></p>

<p>You&#39;ll develop scalable Salesforce solutions and connect complex system landscapes with each other. You&#39;ll enjoy translating business requirements into robust technical implementations.</p>
<p>As a Salesforce Developer, you&#39;ll work at the interface between business and technology. You&#39;ll develop high-performance Salesforce functions, integrate third-party systems, and ensure a high level of quality through structured reviews and tests.</p>
<p><strong>Responsibilities</strong></p>

<ul>
<li>Salesforce Development: You&#39;ll develop and adapt Salesforce functionality, extend existing standard features, and implement new technical solutions.</li>
<li>System Integration: You&#39;ll support the integration of third-party systems into Salesforce and carry out data migrations.</li>
<li>Quality Assurance: You&#39;ll safeguard the quality of the platform through structured code reviews, support test activities, and contribute to a stable, sustainable solution through iterative improvements.</li>
</ul>
<p><strong>What you need</strong></p>

<p>To be well prepared for your path as a Salesforce Developer (all genders), you&#39;ll bring the following qualifications:</p>
<ul>
<li>A completed degree in computer science or a comparable field, plus 3–5 years of professional experience in Salesforce development and system integration.</li>
<li>Experience with the Salesforce Core Clouds and solid knowledge of Apex, JavaScript, HTML, CSS, Lightning Web Components (LWC), Flows, Custom Metadata, SOQL/SOSL, and integrations via REST/SOAP APIs and webhooks.</li>
<li>Certifications in Agile Software Development and DevOps, as well as additional Salesforce certifications (e.g., Administrator, Core Clouds, Platform Developer), are a plus.</li>
<li>A passion for modern Salesforce solutions, clean integrations, and connecting business requirements with technical implementation.</li>
<li>A structured, analytical, and solution-oriented way of working.</li>
</ul>
<p><strong>Why this matters</strong></p>

<p>As a Salesforce Developer, you&#39;ll be part of a team that&#39;s shaping the future of digital transformation. You&#39;ll have the opportunity to work on exciting projects, collaborate with talented colleagues, and contribute to the company&#39;s growth and success.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce Development, System Integration, Quality Assurance, Apex, JavaScript, HTML, CSS, Lightning Web Components (LWC), Flows, Custom Metadata, SOQL/SOSL, REST/SOAP APIs, Webhooks, Agile Software Development, DevOps, Salesforce certifications</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>MHP - A Porsche Company</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalizes processes and products for its customers and accompanies them in their IT transformations along the entire value chain. As a digitalization pioneer in the sectors of mobility and manufacturing, MHP transfers its expertise to various industries and is the premium partner for thought leaders on the way to a better tomorrow.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=19160</Applyto>
      <Location>Deutschlandweit &amp; Hybrid Work</Location>
      <Country></Country>
      <Postedate>2025-12-17</Postedate>
    </job>
  </jobs>
</source>