<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>a7f761de-2d8</externalid>
      <Title>Software Developer – Microsoft Power Platform &amp; Dynamics 365</Title>
      <Description><![CDATA[<p>As a Software Developer – Microsoft Power Platform &amp; Dynamics 365 at Porsche Engineering Romania, you will play a key role in the company&#39;s digital transformation. Working in a SAFe environment, you will deliver business applications and process automation using the Microsoft Power Platform and Dynamics 365, enabling scalable, data-driven, and AI-ready solutions.</p>
<p>Your responsibilities will include designing, developing, and maintaining business solutions using the Microsoft Power Platform (Power Apps, Power Automate, Power BI, Dataverse). You will build and enhance both model-driven and canvas Power Apps that align with business requirements. Additionally, you will develop and configure Dynamics 365 solutions, working with out-of-the-box capabilities as well as custom extensions.</p>
<p>You will also implement business process automation with Power Automate, including integrations with D365 and external systems. Furthermore, you will design and maintain Power BI dashboards and reports that enable data-driven decision making. You will support data and document migration activities for Dynamics 365 and Dataverse.</p>
<p>You will participate in requirements workshops and translate business needs into technical solutions. You will collaborate with cross-functional and international teams in a SAFe/Agile environment. You will ensure solution quality, security, performance, and maintainability by following best practices.</p>
<p>You will provide technical input, documentation, and support throughout the solution lifecycle. You will act as a trusted technical contributor for customers and internal stakeholders.</p>
<p>To succeed in this role, you will need to have successfully completed a Bachelor&#39;s or Master&#39;s degree in Information Technology or an equivalent education. You will also need 2–5 years of experience developing solutions with Microsoft Power Platform and/or Dynamics 365.</p>
<p>You will need to have hands-on experience with Power Apps (model-driven and/or canvas), Power Automate, Dataverse, Power BI, and Power Virtual Agents. You will also need to be experienced in configuring and customizing Dynamics 365, whether on-premises or in the cloud.</p>
<p>You will need to have a solid understanding of ETL concepts, data modeling, and system integrations. You will also need to have worked with Dynamics 365 integrations and data exchange between systems.</p>
<p>You will need to be familiar with licensing concepts for both the Power Platform and Dynamics 365. You will also need to be able to explain technical solutions in clear, business-friendly language.</p>
<p>You will demonstrate strong analytical, problem-solving, and organizational skills. You will speak English fluently, and German is considered an advantage.</p>
<p>You will communicate effectively, collaborate well in teams, and have a strong commitment to delivering high-quality engineering services to customers.</p>
<p>You are willing to travel when required.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Microsoft Power Platform, Dynamics 365, Power Apps, Power Automate, Power BI, Dataverse, ETL concepts, data modeling, system integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Porsche Engineering Services GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Engineering Romania specializes in complex technical solutions, including the development of intelligent and connected electric vehicles, electronics, and design.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20059</Applyto>
      <Location>Cluj</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>ad0816b8-15c</externalid>
      <Title>Software Developer – Microsoft Power Platform &amp; Dynamics 365</Title>
      <Description><![CDATA[<p>The Software Developer – Microsoft Power Platform &amp; Dynamics 365 role at Porsche Engineering Romania plays a key role in the company&#39;s digital transformation. Working in a SAFe environment, the role delivers business applications and process automation using the Microsoft Power Platform and Dynamics 365, enabling scalable, data-driven, and AI-ready solutions. By translating business requirements into high-quality digital solutions, the role supports efficient processes, system integration, and innovation across the organization.</p>
<p>Key responsibilities include designing, developing, and maintaining business solutions using the Microsoft Power Platform (Power Apps, Power Automate, Power BI, Dataverse), building and enhancing both model-driven and canvas Power Apps, developing and configuring Dynamics 365 solutions, implementing business process automation with Power Automate, and supporting data and document migration activities for Dynamics 365 and Dataverse.</p>
<p>The ideal candidate will have successfully completed a Bachelor&#39;s or Master&#39;s degree in Information Technology or an equivalent education, with 2–5 years of experience developing solutions with Microsoft Power Platform and/or Dynamics 365. They will also have hands-on experience with Power Apps (model-driven and/or canvas), Power Automate, Dataverse, Power BI, and Power Virtual Agents, as well as a solid understanding of ETL concepts, data modeling, and system integrations.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Microsoft Power Platform, Dynamics 365, Power Apps, Power Automate, Power BI, Dataverse, ETL concepts, data modeling, system integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Porsche Engineering Services GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Engineering Romania specializes in complex technical solutions, including the development of intelligent and connected electric vehicles, electronics, and design.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20058</Applyto>
      <Location>Cluj</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>5f4a61c4-ad0</externalid>
      <Title>Associate Manager, Amazon Insights and Analytics</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking an Associate Manager, Amazon Insights and Analytics to join our Consumer Health team. As a key member of our marketing department, you will be responsible for transforming complex Amazon datasets into actionable sales strategies. Your primary goal will be to identify trends in customer behavior, help optimize promotional spend, and provide actionable insights to support revenue growth.</p>
<p>Your tasks and responsibilities will include:</p>
<ul>
<li>Transforming weekly performance data into concise, executive-ready performance drivers and drags that explain why sales moved and what the immediate next step should be.</li>
<li>Trend identification: highlighting emerging consumer search patterns before they become mainstream.</li>
<li>Analyzing Amazon Brand Analytics, Profitero, and Circana data to understand consumer performance and market share.</li>
<li>Conducting post-event deep dives for tentpole events (e.g., Prime Day, Black Friday) to measure ROI and inform future forecasting, promo depth, and participation.</li>
<li>Monitoring competitor pricing moves and in-market performance to highlight any action that needs to be taken.</li>
<li>Conversion optimization: monitoring the Buy Box and Glance Views to alert the Sales team to any traffic drops or conversion leaks that require immediate action.</li>
<li>Consumer sentiment analysis: monitoring review counts and star ratings to provide Sales with feedback on product quality or Frequently Bought Together trends.</li>
<li>Dashboard ownership: designing and maintaining automated sales reporting frameworks using Power BI, Tableau, Excel, and any new reporting tools.</li>
<li>KPI management: defining and monitoring critical metrics, including Topline Sales, Glance Views, Conversion Rate (CR), Subscription, Profitability, and additional priority KPIs.</li>
<li>Standardized reporting: delivering weekly, monthly, and quarterly business reviews to the sales team and key internal stakeholders, highlighting Wins and Opportunities based on sales volume and market share.</li>
<li>New item launches: creating the Launch Blueprint for new SKUs, using historical category data to set realistic sales targets and advertising benchmarks.</li>
<li>Tracking competitor out-of-stock events, pricing, or promotional changes to provide Sales with real-time opportunities and challenges.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>A bachelor&#39;s degree in Business, Finance, Economics, Statistics, or a related analytical field.</li>
<li>2-4+ years of experience in e-commerce analytics, retail, or a CPG environment.</li>
<li>Advanced Excel skills, including pivot tables, VLOOKUP/XLOOKUP, and complex data modeling.</li>
<li>Proven experience building dashboards in Power BI or Tableau.</li>
<li>A sales-first mindset, with the ability to see a data point and immediately translate it into a revenue-generating idea.</li>
<li>Agility, with comfort working in a fast-paced environment where data is needed quickly.</li>
<li>Experience with third-party Amazon tools, such as Pacvue, Helium10, CommerceIQ, Stackline, NielsenIQ, Profitero, or similar tools.</li>
<li>Amazon fluency, with expertise in Vendor Central and familiarity with Amazon Marketing Cloud.</li>
</ul>
<p>As an Associate Manager, Amazon Insights and Analytics, you can expect to be paid a salary of approximately $115-173k, with additional compensation possible through a bonus or incentive program. Benefits include health care, vision, dental, retirement, PTO, sick leave, and more.</p>
<p>If you&#39;re interested in joining our team and contributing to our mission of Health for all, Hunger for none, please apply now.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$115-173k</Salaryrange>
      <Skills>data analysis, Amazon insights, analytics, Power BI, Tableau, Excel, VLOOKUP/XLOOKUP, complex data modeling, sales strategy, trend identification, consumer behavior, promotional spend, ROI analysis, forecasting, advertising benchmarks, third-party Amazon tools, Pacvue, Helium10, CommerceIQ, Stackline, NielsenIQ, Profitero, Amazon fluency, Vendor Central, Amazon Marketing Cloud</Skills>
      <Category>Marketing</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976752151</Applyto>
      <Location>Whippany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows that connect operational systems with data for analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code to ensure processes perform optimally, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build, and test data products that are complex or large scale, and build teams to complete data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognized data modeling patterns and standards and when to apply them, and compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design an appropriate metadata repository and present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products, and services as they occur; initiate actions, monitor services, and identify trends to resolve problems; and determine the appropriate remedy, assisting with its implementation and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct, and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications, define test conditions, identify issues and risks associated with the work, and analyze and report on test activities and results.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor&apos;s degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90b5ac1d-d16</externalid>
      <Title>Senior Software Engineer, Backend — Frontier Data</Title>
      <Description><![CDATA[<p>The Frontier Data team builds the data and systems that power Scale&#39;s most advanced Frontier AI use cases. We&#39;re looking for a Senior Backend Engineer who thrives in ambiguity, moves fast, and enjoys tackling daunting challenges.</p>
<p>As a Senior Backend Engineer, you will own major backend systems for frontier agentic data products, driving projects from early exploration through production deployment. You will build scalable services and pipelines that support agent workflows, architect modular, reusable backend systems, and operate in high-ambiguity environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and building scalable systems while partnering closely with research, product, operations, and other engineering teams</li>
<li>Building scalable services and pipelines that support agent workflows</li>
<li>Architecting modular, reusable backend systems that adapt to evolving product needs</li>
<li>Operating in high-ambiguity environments and breaking down open-ended problems</li>
<li>Partnering cross-functionally with product, research/ML, and infrastructure teams</li>
</ul>
<p>Ideal experience includes 5+ years of full-time software engineering experience, strong backend engineering fundamentals, and experience building systems that scale.</p>
<p>Compensation packages at Scale include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors.</p>
<p>Additional benefits include comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Distributed systems, API design, Data modeling, Production reliability, Docker, Containerized development/production environments, SQL, Modern database-backed application development, Async processing, Workflow engines, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4648525005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3fa0b80f-842</externalid>
      <Title>Staff Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>Job Title: Staff Software Engineer, Public Sector</p>
<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software solutions for the public sector</li>
<li>Work closely with cross-functional teams to develop and deploy software applications</li>
<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>
<li>Develop and maintain software documentation</li>
<li>Participate in code reviews and ensure that code meets quality standards</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Proficiency in programming languages such as Java, Python, or C++</li>
<li>Experience with Agile development methodologies</li>
<li>Strong understanding of software design patterns and principles</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>10+ years of experience in software development</li>
<li>Experience with cloud-based technologies such as AWS or Azure</li>
<li>Experience with DevOps practices</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Salary Range: $252,000-$362,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>Full Stack Development</li>
<li>Cloud-Native Technologies</li>
<li>Data Engineering</li>
<li>AI Application Integration</li>
<li>Problem Solving</li>
<li>Collaboration and Communication</li>
<li>Adaptability and Learning Agility</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with modern web development frameworks</li>
<li>Familiarity with cloud platforms</li>
<li>Understanding of containerization and container orchestration</li>
<li>Knowledge of ETL processes</li>
<li>Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Familiarity with integrating Large Language Models</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$362,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674913005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5465ae79-24e</externalid>
      <Title>Analytics &amp; Systems Lead, Finance</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Analytics &amp; Systems Lead to join our Finance team. In this role, you will work closely with stakeholders across business finance, corporate finance, and accounting to design and develop internal tools and AI agents that automate workflows across finance and accounting.</p>
<p>You will design data models, prototype internal tools, and implement agent-driven workflows using our internal data infrastructure, system integration tooling, Scale&#39;s proprietary AI platform, and emerging AI tools.</p>
<p>Key responsibilities include designing and developing end-to-end agent-driven workflows, building scalable data models and pipelines, partnering with stakeholders to translate business requirements into technical requirements, and collaborating with engineering teams to develop internal tools.</p>
<p>Ideal candidates will have 5+ years of experience in data analytics, analytics engineering, or data science roles, expert knowledge of SQL and Python, and experience building internal tools, automation systems, or data products.</p>
<p>Compensation packages at Scale include base salary, equity, and benefits, with a base salary range of $200,000-$250,000 USD for this full-time position in San Francisco.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$200,000-$250,000 USD</Salaryrange>
      <Skills>SQL, Python, data analysis, data modeling, internal tools, automation systems, data products, JavaScript, modern automation tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673090005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10cff9d9-bba</externalid>
      <Title>Senior Compensation Partner</Title>
      <Description><![CDATA[<p>About Gusto</p>
<p>At Gusto, we&#39;re on a mission to grow the small business economy. We handle the hard stuff (payroll, health insurance, 401(k)s, and HR) so owners can focus on their craft and their customers.</p>
<p>With teams in Denver, San Francisco, and New York, we support more than 400,000 small businesses nationwide and are building a workplace that reflects the people we serve.</p>
<p>All full-time employees receive competitive base pay, benefits, and equity (RSUs), because everyone who helps build Gusto should share in its success. Offer amounts are determined by role, level, and location. Learn more about our Total Rewards philosophy.</p>
<p>AI is a fundamental part of how work gets done at Gusto. We expect all team members to actively engage with AI tools relevant to their role and grow their fluency as the technology evolves. AI experience requirements vary by role and will be assessed during the interview process.</p>
<p>About the Role: The Senior Compensation Partner is a core member of Gusto&#39;s People team, responsible for building and scaling the compensation programs that help us attract, retain, and reward top talent. This role exists to bring rigor, fairness, and strategic clarity to how Gusto thinks about pay, operating at the intersection of data, policy, and people.</p>
<p>Operating with a systems-first, AI-aware mindset, the Senior Compensation Partner uses automation, scalable workflows, and well-designed tooling to improve decision quality, ensure compliance, and reduce reliance on manual, one-off analysis. This role partners closely with HR Business Partners, Finance, Legal, and business leaders to ensure our compensation programs are grounded in market reality and aligned to our values.</p>
<p>Success in this role is measured by the quality and fairness of compensation decisions, the scalability and reliability of our programs, and the degree to which the organization can operate with confidence, without constant intervention or heroics.</p>
<p>Gusto&#39;s compensation function, built on first-principles thinking, is at an inflection point, scaling from early foundations to a mature, high-trust operation. You&#39;ll have real ownership to shape how we design and deliver compensation as the company grows.</p>
<p>Here’s what you’ll do day-to-day:</p>
<ul>
<li>Develop, implement, and communicate Gusto&#39;s compensation programs, policies, and procedures, ensuring internal equity, external competitiveness, and alignment with business strategy.</li>
<li>Partner with and influence key stakeholders, including HRBPs, Finance, and senior leadership, to translate compensation philosophy into clear, actionable programs.</li>
<li>Lead annual compensation cycles (merit, equity refresh, benchmarking) and own the end-to-end process from data collection through communication.</li>
<li>Perform market analyses and compensation studies using survey data (e.g., Radford/Aon, Mercer, Pave) to recommend pay ranges, job levels, and offer positioning.</li>
<li>Use AI tools and automation thoughtfully to accelerate analysis, synthesize benchmarking data, and reduce planning and reporting load, while maintaining sound judgment around data quality and decision guardrails.</li>
<li>Design and maintain scalable compensation workflows and tooling, automating recurring analyses, audit processes, and reporting so programs run reliably without manual intervention.</li>
<li>Build and maintain compensation dashboards and data visualizations that give leaders real-time insight into pay equity, market positioning, and budget utilization.</li>
<li>Craft competitive, compelling offers to attract top-tier talent, and build playbooks to help recruiters and HRBPs communicate Gusto&#39;s total rewards story.</li>
<li>Ensure ongoing compliance with federal, state, and local pay regulations, including pay transparency laws, FLSA classification standards, and emerging pay equity requirements, and proactively flag legislative changes.</li>
<li>Partner with People Operations to integrate and optimize HRIS and compensation tools, ensuring data accuracy and system scalability as the company grows.</li>
</ul>
<p>Here’s what we&#39;re looking for:</p>
<ul>
<li>8+ years of experience in Compensation, Total Rewards, People Operations, or a related analytical function, ideally in a high-growth technology environment.</li>
<li>AI fluency: hands-on experience using AI tools or automation to accelerate analysis or synthesis, paired with strong judgment around trust and validation.</li>
<li>Hands-on experience running compensation cycles (merit, equity, benchmarking) and building scalable programs from early-stage to maturity.</li>
<li>Survey benchmarking fluency: direct experience with Radford/Aon, Mercer, or comparable compensation survey tools and methodologies.</li>
<li>Systems-first orientation: experience building durable workflows, automating reports, or designing operating mechanisms that reduce recurring manual work.</li>
<li>Strong analytical foundation: advanced spreadsheet skills (Excel, Google Sheets), comfort with data modeling, and experience building dashboards or visualizations that leaders actually use.</li>
<li>Knowledge of compensation and employment law, including FLSA, pay transparency statutes, and pay equity frameworks, or foundational knowledge with a clear drive to build expertise.</li>
<li>Clear, concise communicator who can translate complex pay data into narratives and recommendations for non-specialist audiences, including executives.</li>
<li>Comfort operating at multiple altitudes, from hands-on analysis and modeling to advising leadership on compensation philosophy and strategy.</li>
<li>High integrity and intellectual curiosity; someone who asks hard questions about fairness and market position, and isn&#39;t satisfied with the status quo.</li>
<li>Background in Mathematics, Finance, Statistics, Economics, or a related quantitative field is preferred but not required.</li>
</ul>
<p>Gusto offers competitive cash compensation, equity, and a comprehensive benefits package. Our cash compensation range for this role is $124,000 to $150,000 in Denver, and $152,000 to $185,000 in San Francisco and New York. Final offer amounts are determined by multiple factors, including candidate location, experience, and expertise. Gusto&#39;s Total Rewards philosophy is rooted in transparency, fairness, and the belief that people do their best work when they feel secure and valued. As a member of the compensation team, you&#39;ll have unique visibility into, and influence over, the programs that define that experience for every Gustie.</p>
<p>Gusto has physical office spaces in Denver, San Francisco, and New York City. Employees who are based in those locations will be expected to work from the office on designated days approximately 2-3 days per week (or more depending on role). The same office expectations apply to all roles at Symmetry, Gusto&#39;s subsidiary, whose physical office is in Scottsdale. Note: The San Francisco office expectations encompass both the San Francisco and San Jose metro areas. When approved to work from a location other than a Gusto office, a secure, reliable, and consistent internet connection is required. This includes non-office days for hybrid employees.</p>
<p>Our customers come from all walks of life and so do we. We hire great people from a wide variety of backgrounds, not just because it&#39;s the right thing to do, but because it makes our company stronger. If you share our values and our enthusiasm for small businesses, you will find a home at Gusto. Gusto is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or other applicable legally protected characteristic.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$124,000 to $150,000 in Denver, and $152,000 to $185,000 in San Francisco and New York</Salaryrange>
      <Skills>Compensation, Total Rewards, People Operations, Analytical function, AI tools, Automation, Survey benchmarking, Compensation survey tools, Data modeling, Spreadsheet skills, Data visualization, Pay equity frameworks</Skills>
      <Category>HR</Category>
      <Industry>Technology</Industry>
      <Employername>Gusto</Employername>
      <Employerlogo>https://logos.yubhub.co/gusto.com.png</Employerlogo>
      <Employerdescription>Gusto is a provider of payroll, health insurance, 401(k)s, and HR services to small businesses.</Employerdescription>
      <Employerwebsite>https://www.gusto.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gusto/jobs/7677492</Applyto>
      <Location>Denver, CO; San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f1a5b85-116</externalid>
      <Title>Mission Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Mission Software Engineer to join our dynamic Federal Engineering team. As a part of this team, you will play a critical role in supporting Scale&#39;s government customers by scoping and developing onsite solutions.</p>
<p>Our scalable, high-performance platform is the foundation for these customer solutions, and your expertise will be instrumental in designing and implementing systems that can handle interactions with existing customer systems to help our products integrate into existing customer workflows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work directly with customers to understand their problems and translate those into features in Scale&#39;s platform.</li>
<li>Be open to &gt;50% travel or relocation to a key customer geographic location.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Implement end-to-end data integrations, syncing customer&#39;s data to Scale&#39;s platform and back.</li>
<li>Deploy and maintain Scale software at customer sites.</li>
<li>Develop customer-requested features and work closely with customers to ensure those features win customer love.</li>
<li>Build robust and reliable backend systems that can serve as standalone products, empowering customers to accelerate their own AI ambitions.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>Track record of success as a hybrid customer-facing, forward-deployed software engineer, with the ability to quickly adapt to different roles.</li>
<li>Prior experience developing with Python and JavaScript, or other modern software languages. Familiarity with Node and React is a plus.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Linux experience: Understanding of shell scripting, operating systems, etc.</li>
<li>Networking experience: Understanding of networking technologies and configuration (ports, protocols, etc.)</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval.</p>
<p>Benefits:</p>
<ul>
<li>Comprehensive health, dental and vision coverage,</li>
<li>Retirement benefits,</li>
<li>A learning and development stipend,</li>
<li>Generous PTO,</li>
<li>Commuter stipend</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$138,000-$292,560 USD</Salaryrange>
      <Skills>Python, JavaScript, Node, React, Cloud-Native Technologies, Linux, Networking, Data Engineering, ETL, Data Modeling, Data Warehousing, Data Governance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4481921005</Applyto>
      <Location>Boston, MA; Honolulu, HI; San Diego, CA; San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7e22bd51-5ef</externalid>
      <Title>Senior Product Manager, CRM</Title>
      <Description><![CDATA[<p>We are seeking a Senior Product Manager, CRM Backend to drive the strategy and execution of our core CRM platform and data model. As a member of the Client Engagement team, you will work closely with other Product Managers, Data Engineering, and various teams across the organization that rely on customer data. This is an opportunity for you to craft the fundamental data and platform strategy for our CRM offering, ensuring its scalability and long-term viability as a critical company asset.</p>
<p>In this role, you will define and own the CRM data model strategy, guide and execute scalable platform decisions for the CRM backend, oversee and define the strategy for third-party integrations, and leverage the latest technical advancements in data management. You will also lead and align a dedicated team of engineers, prioritize their work, and manage the technical roadmap for the CRM backend platform.</p>
<p>We are looking for a candidate with 4+ years of experience in product management, specifically focusing on backend systems, data models, or platform products. You should have a strong technical background and experience in data management, data modeling, and data strategy. Additionally, you should be well-versed in managing technical requirements for third-party integrations and data partnerships.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>product management, backend systems, data models, platform products, data management, data modeling, data strategy, third-party integrations, data partnerships</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and is headquartered in New York City, with offices in Dublin, Ireland, and Aveiro, Portugal.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7591635</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bfddfcc3-e38</externalid>
      <Title>Senior Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you will lead the development of a vertical feature or a horizontal capability to include defining requirements with stakeholders and implementation until it is accepted by the stakeholders.</p>
<p>You will:</p>
<ul>
<li>Lead the design and implementation of scalable backend systems and distributed architectures for Federal customers.</li>
<li>Manage the full lifecycle of feature development from requirement definition to deployment on classified networks.</li>
<li>Direct the orchestration of asynchronous agent fleets to meet mission requirements.</li>
<li>Lead customer engagements to translate mission needs into technical requirements.</li>
<li>Own the communication with stakeholders to ensure implementation meets defined acceptance criteria.</li>
<li>Conduct technical reviews and identify risks within machine learning infrastructure and model serving.</li>
<li>Drive the platform roadmap by providing technical specifications for Federal product offerings.</li>
</ul>
<p>Ideally you will have:</p>
<ul>
<li>Full Stack Development: Proficiency in front-end and back-end development and infrastructure, including experience with modern web development frameworks, programming languages, and databases.</li>
<li>Cloud-Native Technologies: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment. Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.</li>
<li>Data Engineering: Knowledge of ETL (Extract, Transform, Load) processes and experience in building data pipelines to integrate and process diverse data sources. Understanding of data modeling, data warehousing, and data governance principles.</li>
<li>AI Application Integration: Familiarity with integrating Large Language Models (LLMs) and building agentic workflows. Understanding of prompt engineering, retrieval-augmented generation (RAG), and agent orchestration is beneficial.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to defining and evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$311,000 USD (San Francisco, New York, Seattle); $194,400-$279,000 USD (Hawaii, Washington DC, Texas, Colorado); $162,400-$233,000 USD (St. Louis)</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Docker, Kubernetes, AWS, Azure, GCP, ETL, data modeling, data warehousing, data governance, Large Language Models, prompt engineering, retrieval-augmented generation, agent orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674911005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53ee0ef3-c62</externalid>
      <Title>Staff Data Engineer, Analytics Data Engineering</Title>
      <Description><![CDATA[<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>
<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>
<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>
<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>
<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>
<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>
<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>
<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>
<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>
</ul>
<p>Requirements:</p>
<ul>
<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>
<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>
<li>12+ years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>
<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>
<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>
<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>
<li>Experience leading orchestration or platform modernization efforts at scale</li>
<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>
<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>
<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>
</ul>
<p>Compensation:</p>
<p>US Zone 2: $198,900-$269,100 USD</p>
<p>US Zone 3: $176,800-$239,200 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$198,900-$269,100 USD</Salaryrange>
      <Skills>SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a technology company that provides cloud storage and file sharing services.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7595183</Applyto>
      <Location>Remote - US: Select locations</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>69e8923b-c16</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. Our team uses data and insights to drive evidence-based decision-making, generating actionable insights about our customers, products, and business.</p>
<p>As a Senior Data Scientist, you&#39;ll partner with product teams to help them identify important questions and answer those questions with data. You&#39;ll work closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities.</p>
<p>You&#39;ll design, build, and update end-to-end data pipelines, working closely with stakeholders to drive the collection of new data and the refinement of existing data sources and tables. You&#39;ll also partner closely with product researchers to build a holistic understanding of our customers, products, and business.</p>
<p>Increasingly, you&#39;ll use AI-assisted tools to accelerate analysis, coding, and insight generation. You&#39;ll identify opportunities to automate your own workflows and reduce time spent on repetitive tasks. You&#39;ll build scalable data products that enable stakeholders to self-serve insights and raise the bar for how AI is used within RAD.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with product teams to help them identify important questions and answer those questions with data</li>
<li>Working closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities</li>
<li>Designing, building, and updating end-to-end data pipelines</li>
<li>Partnering closely with product researchers to build a holistic understanding of our customers, products, and business</li>
<li>Using AI-assisted tools to accelerate analysis, coding, and insight generation</li>
<li>Identifying opportunities to automate your own workflows and reduce time spent on repetitive tasks</li>
<li>Building scalable data products that enable stakeholders to self-serve insights</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience working with data to solve problems and drive evidence-based decisions</li>
<li>Strong SQL skills and solid grounding in statistics</li>
<li>Experience working closely with product teams</li>
<li>Proven track record of delivering actionable insights that drive measurable impact with minimal supervision</li>
<li>Strong product intuition, business acumen, and ability to connect analysis to strategy</li>
<li>Excellent communication skills (technical and non-technical), with a focus on driving decisions and outcomes</li>
<li>Strong ownership, curiosity, and growth mindset</li>
<li>Experience with a scientific computing language (e.g., Python)</li>
</ul>
<p>Preferred skills include:</p>
<ul>
<li>Experience with data modeling and ETL pipelines (esp. dbt)</li>
<li>Experience building internal tools, data products, or self-serve analytics capabilities</li>
<li>Experience leveraging AI across the data workflow - from ideation and coding to analysis and communication</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building are encouraged &amp; celebrated</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>MacBooks are our standard, but we’re happy to get you whatever equipment helps you get your job done</li>
</ul>
<p>Experience Level: Senior. Employment Type: Full-time. Workplace Type: Hybrid. Category: Engineering. Industry: Technology. Salary Range: Competitive salary and equity in a fast-growing start-up. Required Skills: SQL, statistics, experience working with product teams, strong product intuition, business acumen, excellent communication skills, strong ownership, curiosity, and growth mindset, experience with a scientific computing language (e.g., Python). Preferred Skills: data modeling and ETL pipelines (esp. dbt), building internal tools, data products, or self-serve analytics capabilities, leveraging AI across the data workflow.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, statistics, experience working with product teams, strong product intuition, business acumen, excellent communication skills, strong ownership, curiosity, growth mindset, experience with a scientific computing language (e.g., Python), data modeling and ETL pipelines (esp. dbt), building internal tools, data products, or self-serve analytics capabilities, leveraging AI across the data workflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is a customer service company that provides AI-powered solutions for businesses. Founded in 2011, it has nearly 30,000 global clients.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7749323</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>158a429c-4d8</externalid>
      <Title>Senior Data Scientist - Product Analytics</Title>
      <Description><![CDATA[<p>We are seeking a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. The RAD team uses data and insights to drive evidence-based decision-making. We&#39;re a team of data scientists and product researchers who use data to unlock actionable insights about our customers, products, and business.</p>
<p>As a Senior Data Scientist, you will partner with product teams to help them identify important questions and answer those questions with data. You will work closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities.</p>
<p>Your responsibilities will include designing, building, and updating end-to-end data pipelines, working closely with stakeholders to drive the collection of new data and the refinement of existing data sources and tables. You will also partner closely with product researchers to build a holistic understanding of our customers, products, and business.</p>
<p>You will influence our product roadmap and product strategy through experimentation, exploratory analysis, and quantitative research. You will build and automate actionable models and dashboards, craft data stories, and share your findings and recommendations across R&amp;D and the broader company.</p>
<p>You will drive and shape core RAD foundations and help us improve how the RAD org operates.</p>
<p>We are looking for someone with 5+ years of experience working with data to solve problems and drive evidence-based decisions. You should have excellent SQL skills and experience applying analytical and statistical approaches to problem-solving. You should also have a proven track record of initiating and delivering actionable analysis and insights that drive tangible impact with minimal supervision.</p>
<p>Excellent communication skills (technical and non-technical) and a focus on driving impact are essential. A strong growth mindset and sense of ownership, innate passion, and curiosity are also required.</p>
<p>Experience with a scientific computing language (such as R or Python) is necessary. Experience with BI/Visualization tools like Tableau, Superset, and Looker is a bonus. Experience working with product teams and leveraging AI tools to boost efficiency and creativity across the data science workflow is also desirable.</p>
<p>We offer a competitive salary and equity in a fast-growing start-up. We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen. Regular compensation reviews, life assurance, comprehensive health and dental insurance, open vacation policy, flexible holidays, paid maternity leave, and 6 weeks paternity leave are also part of our benefits package.</p>
<p>Our working policy is hybrid, with employees expected to be in the office at least three days per week. We have a radically open and accepting culture, avoiding divisive subjects to foster a safe and cohesive work environment for everyone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Analytical and statistical approaches, Scientific computing language (R or Python), BI/Visualization tools (Tableau, Superset, Looker), Product teams experience, AI tools, Data modeling and ETL pipelines, Communication skills (technical and non-technical)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/6317929</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cc9af01d-1b9</externalid>
      <Title>Partner Business Systems &amp; AI Operations Lead</Title>
      <Description><![CDATA[<p>The Partner Business Systems &amp; AI Operations Lead will own the foundation of the Claude Partner Network, including the Salesforce partner data model, the partner platform stack, and the integrations between them. This role will also define and own the partner data quality standard, administer the partner platform stack, and build and operate the AI automation layer across the partner workflow stack.</p>
<p>Key responsibilities include owning the Salesforce partner data model end to end, administering the partner platform stack, defining and owning the partner data quality standard, partnering with the Business Process Manager to instrument every partner process, running access and configuration governance for partner systems, and building and operating the AI automation layer.</p>
<p>The ideal candidate will have five or more years in revenue systems, partner systems, or business systems roles with hands-on Salesforce administration or architecture experience, and will be able to translate a program rule into a schema, a validation rule, and an entitlement flow without a detailed specification.</p>
<p>Strong candidates may also have Salesforce Administrator or Platform App Builder certification, or experience with Experience Cloud or a PRM such as Impartner or Salesforce PRM, SQL fluency for data quality checks and ad hoc analysis, prior partner program or channel operations experience, and experience standing up a data quality program from the ground up.</p>
<p>The annual compensation range for this role is $215,000-$300,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$215,000-$300,000 USD</Salaryrange>
      <Skills>Salesforce administration, Salesforce architecture, Data modeling, Data quality, AI automation, Partner systems, Revenue systems, Business systems, Salesforce Administrator certification, Platform App Builder certification, Experience Cloud, PRM, SQL, Partner program operations, Channel operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191437008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>40db054b-06d</externalid>
      <Title>Senior Product Manager, Access</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Technical Product Manager to join our Access team within the Acuity Scheduling department. As a Senior Technical Product Manager for Acuity Scheduling, you&#39;ll own the systems that control how customers sign in, manage identity, and pay for the platform.</p>
<p>This is a hybrid role working 3 days per week from our Aveiro office. You will report to the Group Product Manager on the Acuity Scheduling team.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Product Ownership: Act as the primary product owner for your team, setting the roadmap and priorities based on technical feasibility, user impact, and business goals.</li>
<li>Technical Strategy &amp; Roadmap: Collaborate with engineers to define a scalable technical strategy that aligns with product goals, focusing on data architecture, systems design, and service-based solutions.</li>
<li>Cross-functional Collaboration: Partner with engineering, data science, and UX teams to understand requirements, manage trade-offs, and deliver solutions that balance speed and scalability.</li>
<li>Cross-organization Collaboration: Work directly with Squarespace Identity and Security teams to develop and realise a shared vision of a singular identity and authentication system.</li>
<li>Stakeholder Communication: Translate technical architecture and system requirements into clear, actionable items for stakeholders across the company, including senior leadership and non-technical teams.</li>
<li>Quality, Security &amp; Performance Optimisation: Focus on the system&#39;s stability, reliability, and scalability by working closely with the engineering team on continuous improvement, platform security and technical debt management.</li>
<li>Architecture Oversight: Guide architectural decisions to ensure optimal security, data flow, storage, and access within our product ecosystem. Advocate for sustainable choices in a service-oriented approach to component-based architecture.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience: 4-6 years in product management or a technical role with product ownership, preferably within a data-driven environment; ideally in an identity-centric role.</li>
<li>Technical Expertise: Strong background as the technical product lead on teams owning data architecture, systems design, and service-based architecture (e.g., microservices), with the ability to engage deeply in technical discussions and decisions.</li>
<li>Systems Thinking: Proven experience in end-to-end system thinking and design, including a strong grasp of component-based architectures, data storage options, and integration layers.</li>
<li>Data Architecture: Hands-on experience with data modeling, database design, and data warehousing principles, including familiarity with large data model improvement initiatives.</li>
<li>APIs &amp; Integration: Understanding of RESTful API design, OAuth, identity federation, and integration patterns to ensure seamless interoperability between services and systems.</li>
<li>Analytical Mindset: Proficiency in using data to inform decisions autonomously, including experience with data analysis and product analytics tools.</li>
<li>Communication Skills: Ability to communicate complex technical concepts to both technical and non-technical audiences, bridging the gap between product vision and technical execution.</li>
<li>Agile Experience: Familiarity with Agile methodologies, including backlog management, sprint planning, and cross-functional team collaboration.</li>
<li>Project Management: Familiarity with project management tools like Jira, Asana, or similar.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Technical Transformation: Experience leading teams and/or organisations from a monolithic architecture to a service-oriented one.</li>
<li>Identity Experience: High-level understanding of and experience with core user registration flows and OAuth as it pertains to user identity needs.</li>
<li>Technical Documentation: Experience documenting technical data architecture, service flows, and system dependencies to ensure alignment and knowledge-sharing within the team</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<ul>
<li>Health insurance with 100% covered premiums for you, your spouse or partner, and dependent children, including medical, dental, and vision</li>
<li>Life and Disability Insurance</li>
<li>Pension benefits with employer match</li>
<li>Fertility and adoption benefits</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Statutory paid time off and all statutory leaves, as required</li>
<li>Meal Allowance and Flex Benefits Account</li>
<li>Employee donation match to community organisations</li>
<li>An office in the easily accessible city centre of Aveiro</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data architecture, systems design, service-based architecture, microservices, data modeling, database design, data warehousing, RESTful API design, OAuth, identity federation, integration patterns, data analysis, product analytics tools, Agile methodologies, backlog management, sprint planning, cross-functional team collaboration, project management tools, Jira, Asana</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and operates in more than 200 countries.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7698954</Applyto>
      <Location>Aveiro</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>704c1e4a-ec2</externalid>
      <Title>Engineering Manager, Guest &amp; Host</Title>
      <Description><![CDATA[<p>We are seeking an experienced Engineering Manager to join our Guest &amp; Host Services team. As an Engineering Manager, you will lead a team of talented software developers to prototype, build and improve hosting services. Your responsibilities will include exploring new product experiences, leading investments into new technical capabilities, collaborating with engineers to plan and sequence the delivery of new features and capabilities, and partnering with peer engineering teams to integrate new features that impact different areas within the product.</p>
<p>You will also be responsible for recruiting and nurturing exceptional engineering talent, fostering an inclusive team culture and environment that encourages collaboration, technical excellence, and innovation, and making sure the team&#39;s partnerships with key stakeholders in Engineering, Design, and Product are strong.</p>
<p>To be successful in this role, you will need to have 5+ years of engineering management experience, with 9+ years of relevant software development industry experience in a fast-paced tech environment. You will also need to have deep expertise with backend systems and complex data modeling in large-scale consumer applications, excellent communication and presentation skills, and an end-to-end, product-oriented mentality that transcends team boundaries and helps find globally optimal solutions.</p>
<p>As a member of our team, you will have the opportunity to work on a wide range of projects and contribute to the development of new features and capabilities that will help us achieve our mission of making it easier for people to host on Airbnb.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£120,000-£150,000 GBP</Salaryrange>
      <Skills>backend systems, complex data modeling, large-scale consumer applications, engineering management, software development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces for travel accommodations.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7532824</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3917fb4f-2ab</externalid>
      <Title>Full Stack Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented full stack software engineer to join our growing team at Anduril Labs in Washington, DC.</p>
<p>As a full stack software engineer in Anduril Labs, you will help bring innovative, next-generation concepts to life through proof-of-concept development and rapid prototyping using bleeding edge technologies.</p>
<p>The ideal candidate has exceptional software development and creative problem-solving skills, is a self-starter, and can quickly grasp complex concepts.</p>
<p>As a full stack software engineer, you possess the skills to architect, develop, and deploy distributed applications and services, including both front-end and back-end components.</p>
<p>You have experience with the agile, end-to-end software development lifecycle and are comfortable developing and deploying code across Windows and Linux-based systems (including standalone bare-metal hardware, virtualized environments, and cloud-hosted platforms).</p>
<p>Embedded software development experience is a plus.</p>
<p>You are also proficient in integrating legacy code and systems, leveraging open-source technologies, and developing and utilizing APIs.</p>
<p>Additionally, you have a solid understanding of AI/ML core concepts (e.g., feature extraction, supervised vs. unsupervised learning, regression, classification, clustering, deep learning neural networks, NLP, LLMs, SLMs, model fine-tuning, prompt engineering, RAG) and hands-on experience developing (Gen)AI-enhanced applications or services.</p>
<p>We also expect candidates to have familiarity with database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB) and experience with data modeling, data wrangling, analytics, and visualization.</p>
<p>Since Anduril Labs supports all Anduril businesses and product lines, you will have the unique opportunity to work closely with multi-disciplinary engineering and product development teams across the entire company.</p>
<p>This means you will get to directly contribute to the development of Anduril’s next-generation products and services.</p>
<p>So if you thrive in a dynamic environment that values creative problem-solving, love writing code, excel as both an individual contributor and team player, are eager to learn, and bring a can-do attitude, this role is for you.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead the development of prototypes to demonstrate advanced concepts in areas like autonomous and multi-agent systems, GenAI, advanced data analytics, quantum computing/sensing/networking/comms/machine learning, modeling, simulation, optimization, visualization, next-gen human-machine interfaces, heterogeneous computing, and cybersecurity.</li>
<li>Own the entire Software Development Lifecycle from inception through development, testing, deployment, and documentation for Anduril Labs-developed software prototypes.</li>
<li>Interface and collaborate with other Anduril and customer engineering teams, and strategic partners.</li>
<li>Support Anduril- and customer-funded R&amp;D efforts.</li>
<li>Participate in field experiments and technology demonstrations.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3+ years of programming with Python, C++, Java, Rust, Go, or JavaScript/TypeScript.</li>
<li>Proven software architecture and design skills.</li>
<li>Ability to quickly understand and navigate complex systems and established codebases.</li>
<li>AI/ML development using commercial and open-source AI frameworks, models, and tools (e.g., Jupyter Notebook, PyTorch, TensorFlow, Scikit-learn, OpenAI, Claude, Gemini, Llama, LangChain, YOLO, AWS Sagemaker, Bedrock, Azure AI, RAG).</li>
<li>Web app development (e.g., React, Angular, or Vue).</li>
<li>Cloud development (e.g., AWS, Azure, or GCP).</li>
<li>Data modeling and wrangling.</li>
<li>Networking basics (e.g., DNS, TCP/IP vs. UDP, socket communications, LDAP, Active Directory).</li>
<li>Database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB).</li>
<li>API development and integration (e.g., REST, GraphQL).</li>
<li>Containerization technologies (e.g., Docker, Kubernetes).</li>
<li>Software development on Linux and Windows.</li>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
<li>Excellent communication and interpersonal skills, with the ability to effectively articulate complex technical concepts to diverse audiences.</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>BS in Computer Science, Engineering, or similar field.</li>
<li>Distributed applications development (e.g., client/server, microservices, multi-agent solutions).</li>
<li>High performance computing (HPC) and big data technologies (e.g., Apache Spark, Hadoop).</li>
<li>Mobile app development (e.g., iOS or Android).</li>
<li>Embedded software development experience.</li>
<li>Willingness to travel up to approximately 10% within the US.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>Python, C++, Java, Rust, Go, JavaScript/TypeScript, Software Architecture, AI/ML, Web App Development, Cloud Development, Data Modeling, Networking, Database Technologies, API Development, Containerization, Git Version Control, Build Tools, CI/CD Pipelines, Unit Testing, Integration Testing, End-to-End Testing, Distributed Applications Development, High Performance Computing, Big Data Technologies, Mobile App Development, Embedded Software Development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089044007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a17bc01-d7d</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>DBT Labs is seeking a Staff Software Engineer to join our Engineering team. As a seasoned engineer, you will architect and build the durable memory substrate that powers agentic analytics workflows. This platform stores not just metadata, but meaning: decisions, intent, rationale, and history , and makes it safely accessible to humans, agents, and applications.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Prototyping suitable technical solutions and identifying the best fit for the context engine.</li>
<li>Architecting and building the core Context Platform.</li>
<li>Designing schemas and primitives for Decision Memory and enterprise context.</li>
<li>Owning context storage systems (graph, vector, event/time-based).</li>
<li>Building read/write/query APIs used by agents, products, and external apps.</li>
<li>Designing permission-aware, auditable context access.</li>
</ul>
<p>You will be working closely with agentic systems engineers and product leadership to ensure the context engine is interoperable, portable, and zero-lock-in by design.</p>
<p>In this role, you will own:</p>
<ul>
<li>Context schemas and schema evolution strategies.</li>
<li>Storage and data modeling choices.</li>
<li>Platform APIs and interfaces.</li>
<li>Security, identity propagation, and audit foundations.</li>
<li>Long-term scalability and correctness of context data.</li>
</ul>
<p>You will not own:</p>
<ul>
<li>Agent behavior or orchestration logic.</li>
<li>Business rules or governance policy decisions.</li>
<li>Product UI or workflow automation.</li>
</ul>
<p>The ideal candidate will have significant experience building distributed systems, data platforms, or infrastructure, and will be comfortable operating in ambiguous, greenfield problem spaces. They will also have deep expertise in data modeling and schema design, experience designing shared platforms used by many teams, and strong instincts around APIs, contracts, and backward compatibility.</p>
<p>Nice-to-have qualifications include experience with knowledge graphs, metadata systems, or search/retrieval systems; experience building systems with governance, auditability, or compliance requirements; and familiarity with dbt, modern analytics stacks, or developer tooling.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems, Data platforms, Infrastructure, Data modeling, Schema design, APIs, Contracts, Backward compatibility, Knowledge graphs, Metadata systems, Search/retrieval systems, dbt, Modern analytics stacks, Developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4661362005</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>584a7343-f48</externalid>
      <Title>Senior Revenue Strategy &amp; Operations Manager</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel is a digital analytics platform that helps companies understand user behavior and track company success metrics.</p>
<p>The Revenue Strategy &amp; Operations team at Mixpanel partners with Regional Business Leaders &amp; Global Leaders to help set and execute global and regional revenue strategies.</p>
<p>About the Role</p>
<p>As Senior Revenue Strategy &amp; Operations Manager, you&#39;ll help define the strategy behind customer retention and account growth. You&#39;ll partner closely with CS and Sales leadership to improve Gross and Net Revenue Retention, enable scalable expansion motions, and proactively identify areas to enhance the customer experience and lifetime value.</p>
<p>Responsibilities</p>
<ul>
<li>Drive strategic initiatives that shape how we engage, retain, and grow our customer base</li>
<li>Conduct advanced market, customer, and product analyses to uncover whitespace opportunities, inform strategic bets, and guide resource allocation</li>
<li>Own and evolve the reporting framework for sales and post-sales KPIs, translating insights into recommendations that shape our post-sales strategy</li>
<li>Act as a strategic thought partner to GTM leadership, bringing analytical rigor and business acumen to inform key decisions and go-to-market priorities</li>
<li>Provide high-impact operational support to the Sales organization, proactively identifying bottlenecks and implementing scalable solutions to improve performance</li>
<li>Run and evolve key operating cadences, including business performance deep dives and executive-level business reviews across post-sales teams</li>
<li>Collaborate cross-functionally with Product, Sales, Finance, and Marketing to ensure alignment and shared ownership of customer outcomes throughout the full lifecycle</li>
</ul>
<p>We&#39;re Looking for Someone Who Has</p>
<ul>
<li>4+ years of experience at a top-tier Management or Strategy Consulting firm</li>
<li>2+ years of operating experience in Corporate Strategy, Business Operations, or Sales Strategy in a high-growth, fast-paced environment at a B2B SaaS organization</li>
<li>Experience defining and operationalizing GTM Strategy and proven experience driving impact on retention and expansion metrics (GRR, NRR, etc.)</li>
<li>Strong project management experience with demonstrated ability to effectively manage time, prioritize tasks, and work within deadlines</li>
<li>Track record of collaborating with cross-functional partners to execute projects</li>
<li>Highly motivated, with innate intellectual curiosity and a strong desire to drive impact</li>
<li>Excellent problem-solving skills and the ability to thrive in a fast-paced, dynamic environment</li>
<li>Outstanding written and oral communication skills</li>
</ul>
<p>Bonus Points For</p>
<ul>
<li>Working knowledge of SFDC including Reporting and Record Management from lead creation through opportunity closure</li>
<li>Proficiency in modeling and analyzing complex and large data sets</li>
<li>Demonstrable passion for the data analytics industry</li>
</ul>
<p>Compensation</p>
<p>The amount listed below is the total target cash compensation (TTCC) and includes base compensation and variable compensation in the form of either a company bonus or commissions. Variable compensation type is determined by your role and level. In addition to the cash compensation provided, this position is also eligible for equity consideration and other benefits including medical, vision, and dental insurance coverage.</p>
<p>Our salary ranges are determined by role and level and are benchmarked to the SF Bay Area Technology data cut released by Radford, a global compensation database. The range displayed represents the minimum and maximum TTCC for new hire salaries for the position across all of our US locations. To stay on top of market conditions, we refresh our salary ranges twice a year so these ranges may change in the future. Within the range, individual pay is determined by experience, job-related skills, qualifications, and other factors. If you have questions about the specific range, your recruiter can share this information.</p>
<p>Mixpanel Compensation Range: $189,000-$231,000 USD</p>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li>Make Bold Bets: We choose courageous action over comfortable progress.</li>
<li>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience, and collective wisdom to drive powerful outcomes.</li>
<li>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li>Champion the Customer: We seek to deeply understand our customers&#39; needs, ensuring their success is our north star.</li>
<li>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We&#39;re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital. Mixpanel&#39;s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics. Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service. Choosing to work at Mixpanel means you&#39;ll be helping the world&#39;s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity. At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have. We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply. We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status. Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records. We&#39;ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$189,000-$231,000 USD</Salaryrange>
      <Skills>digital analytics, data analysis, project management, strategic planning, business operations, sales strategy, customer retention, account growth, Gross and Net Revenue Retention, expansion metrics, SFDC, Reporting and Record Management, data modeling, data visualization, data science, machine learning, cloud computing, cybersecurity</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a digital analytics platform that helps companies understand user behavior and track company success metrics.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7008408</Applyto>
      <Location>San Francisco, US (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>165c3a6f-f1e</externalid>
      <Title>Data Engineer, Analytics</Title>
      <Description><![CDATA[<p>We are looking for an experienced Data Engineer, Analytics to join our data team. As a Data Engineer, Analytics, you will be responsible for owning the transformation and semantic layer that turns data into clean, tested, well-documented tables and dashboards that data scientists, product managers, and business stakeholders can trust and self-serve from.</p>
<p>You will define and operationalize the metrics that inform how we identify opportunities, measure success, and make decisions. This includes designing, building, and maintaining curated analytical datasets and data models that serve as the canonical sources for metrics, dashboards, and analyses.</p>
<p>You will partner closely with data science, product managers, and engineering teams to translate business questions into well-modeled, performant, and discoverable data assets. You will execute metric workflows, from metric definition and logging schema design to data modeling and visualization, with guidance from manager and senior team members.</p>
<p>You will also build and maintain executive-level dashboards and self-serve reporting tools that enable business stakeholders to answer their own questions.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain curated analytical datasets and data models that serve as the canonical sources for metrics, dashboards, and analyses.</li>
<li>Partner closely with data science, product managers, and engineering teams to translate business questions into well-modeled, performant, and discoverable data assets.</li>
<li>Execute metric workflows, from metric definition and logging schema design to data modeling and visualization, with guidance from manager and senior team members.</li>
<li>Build and maintain executive-level dashboards and self-serve reporting tools that enable business stakeholders to answer their own questions.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of experience in analytics or data engineering with a strong focus on building curated, consumer-facing datasets.</li>
<li>2+ years of experience in designing, developing, and maintaining robust data models from structured and unstructured sources to power a variety of use cases, including experimentation.</li>
<li>2+ years of experience writing accurate and effective SQL.</li>
<li>Fluency in Python or another programming language.</li>
<li>Experience building and owning executive-level dashboards and reports using BI tools (e.g., Looker, Tableau, or similar).</li>
<li>Strong business acumen: you will partner with data scientists and product managers to translate ambiguous questions into concrete metric definitions and data models.</li>
<li>Excellent communication; comfortable being the connective tissue between technical and business teams.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Passion for Discord or online communities.</li>
<li>Experience building or contributing to a semantic layer or metrics store.</li>
<li>Experience with modern analytics and data engineering tools (e.g., dbt, BigQuery).</li>
<li>Experience implementing and monitoring audits for data quality with massive data sets (e.g. billions of rows).</li>
<li>Experience working on SEO, GEO, or other top-of-funnel, growth-focused features.</li>
<li>Experience collaborating with compliance, legal, or litigation cross-functional teams.</li>
</ul>
<p>The US base salary range for this full-time position is $160,000 to $180,000 + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $180,000 + equity + benefits</Salaryrange>
      <Skills>data engineering, analytics, SQL, Python, BI tools, data modeling, metric definition, data visualization, semantic layer, metrics store, modern analytics tools, data quality audits, SEO, GEO, compliance, legal, litigation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform used by over 200 million people every month for various purposes, including playing video games. It plays a significant role in the future of gaming.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8371252002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58a44dab-91a</externalid>
      <Title>Partner Solutions Architect - Japan</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>
<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>
<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>
<p>Internal operating docs show this motion consistently includes enablement sessions, QBR sponsorships, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>
<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>
<p>What you&#39;ll do:</p>
<ul>
<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>
</ul>
<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and engaged pipeline.</p>
<p>What you&#39;ll need:</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out:</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>
<ul>
<li>Interview with Talent Acquisition Partner</li>
<li>Interview with Hiring Manager</li>
<li>Team Interviews</li>
<li>Demo Round</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005</Applyto>
      <Location>Japan - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be766cd7-8e2</externalid>
      <Title>Staff Software Engineer, Backend (Iasi)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5030292008</Applyto>
      <Location>Iasi, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1c6866e-f9e</externalid>
      <Title>Staff Software Engineer, Backend (Cluj)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a customer data platform to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5102480008</Applyto>
      <Location>Cluj, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>98161ddd-28c</externalid>
      <Title>Data Analyst III</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is a finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry.</p>
<p>We&#39;re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream.</p>
<p>We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations.</p>
<p>Our Data Scientists, Analysts, and Engineers work together to make data, and the insights derived from it, a core asset across the company.</p>
<p>What you&#39;ll do</p>
<p>As a senior Data Analyst (DA III), you will own the end-to-end analytics lifecycle for one or more business areas at Brex.</p>
<p>You&#39;ll go beyond building dashboards: you&#39;ll frame the right questions, design rigorous analyses, apply statistical methods, and translate your findings into clear recommendations for leadership.</p>
<p>You will also serve as a technical leader on the Data Analytics team, mentoring more junior analysts and helping define the standards and best practices that elevate the team&#39;s work.</p>
<p>This role sits at the intersection of analytics, analytics engineering, and business strategy.</p>
<p>You&#39;ll work in a modern data stack environment and partner closely with Data Scientists, Data Engineers, and senior leaders across the organization.</p>
<p>Where you&#39;ll work</p>
<p>This role will be based in our San Francisco office.</p>
<p>We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home.</p>
<p>We currently require a minimum of three coordinated days in the office per week: Monday, Wednesday, and Thursday.</p>
<p>As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<ul>
<li>Own the analytics lifecycle for assigned business areas: from problem framing and data sourcing through analysis, insight generation, and stakeholder presentation.</li>
<li>Build and maintain dashboards and self-service reporting tools that enable business teams to independently track performance, identify risks, and make data-driven decisions.</li>
<li>Write production-quality SQL and Python code to extract, transform, and analyze data at scale.</li>
<li>Collaborate with Data Engineers and Data Scientists to develop and maintain analytical data models, improve data pipelines, and ensure data quality across the organization.</li>
<li>Partner with leadership across Sales, Operations, Product, Finance, and other departments to identify high-impact analytical opportunities and deliver actionable recommendations.</li>
<li>Mentor other data analysts and contribute to the development of team standards, documentation, code review practices, and analytical frameworks.</li>
<li>Proactively identify gaps in data infrastructure, propose improvements, and contribute to the evolution of the team’s tooling and processes.</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in data analytics, business intelligence, or a related quantitative role.</li>
<li>3+ years of experience partnering directly with Sales, Operations, Product, or equivalent business teams as an embedded analytics partner.</li>
<li>Advanced SQL proficiency, including CTEs, window functions, performance optimization, and working across complex data models.</li>
<li>Proficiency in Python for data analysis, automation, and modeling (Pandas, NumPy, scikit-learn, or similar).</li>
<li>Experience with cloud data warehouses, particularly Snowflake (BigQuery and Databricks also valued).</li>
<li>Hands-on experience with BI and data visualization tools (Looker, Tableau, Hex, or similar).</li>
<li>Strong stakeholder management skills, with a proven ability to present complex technical findings to non-technical audiences.</li>
<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</li>
</ul>
<p>Bonus points</p>
<ul>
<li>Demonstrated experience applying statistical methods to business problems (e.g., regression, classification, A/B testing).</li>
<li>Experience with dbt for data modeling and transformation.</li>
<li>Experience building and maintaining data pipelines using orchestration tools such as Airflow.</li>
<li>Experience working with APIs for data ingestion and integration.</li>
<li>Familiarity with version control systems (Git).</li>
<li>Experience in fintech, financial services, or payments.</li>
<li>Track record of leading cross-functional analytics projects from scoping through delivery.</li>
</ul>
<p>Compensation</p>
<p>The expected salary range for this role is $114,192 - $142,740.</p>
<p>However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity.</p>
<p>Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$114,192 - $142,740</Salaryrange>
      <Skills>Advanced SQL, Python, Cloud data warehouses, BI and data visualization tools, Stakeholder management, Generative AI and LLM-based tools, Statistical methods, dbt for data modeling and transformation, Orchestration tools, APIs for data ingestion and integration, Version control systems, Fintech, financial services, or payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463699002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9537437b-e23</externalid>
      <Title>Staff Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help design, scale, and operate a high-impact graph data service that underpins agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll partner with a small, senior Rust-first team to ship reliable graph capabilities and make them easy for other teams and agents to use. The Knowledge Graph service is a distributed SDLC indexing system. It builds a property graph from GitLab SDLC (software development lifecycle) and code data using ClickHouse, NATS JetStream, and the Data Insights Platform. It also exposes secure graph queries and MCP tools for AI agents and product features.</p>
<p>In this role, you&#39;ll own core parts of the system end to end: shaping the architecture, hardening multi-tenant behavior and performance, and making it straightforward for other teams and agents to consume graph capabilities. In your first year, you&#39;ll take clear ownership of major areas of the service (for example, the graph query engine, SDLC indexing, or multi-tenant authorization), reduce single points of failure through better runbooks and shared context, and raise the bar on how we design, build, and operate analytical services across the stack.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the design and evolution of core Knowledge Graph services in a production Rust codebase, including the graph query engine, SDLC and code indexing pipelines, and API/MCP surfaces that other GitLab teams and AI agents rely on.</li>
<li>Owning complex, cross-cutting initiatives that span GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform, from technical direction and design docs through implementation, rollout, and iteration.</li>
<li>Driving system design decisions that improve reliability, scalability, and maintainability for analytical (OLAP-style) graph workloads, including multi-hop traversals, aggregations, and multi-tenant isolation, and documenting trade-offs so the broader team can move quickly and stay aligned.</li>
<li>Defining and improving operational maturity for the service, including service level objectives (SLOs), observability, runbooks, incident response, capacity planning, and production readiness (PREP) for GitLab.com, Dedicated, and Self-Managed deployments.</li>
<li>Collaborating asynchronously with product, data, infrastructure, security, and AI teams to sequence work, unblock platform-level dependencies, and land features in a way that is safe for customers and sustainable for the team.</li>
<li>Applying AI-assisted development workflows responsibly (for example, using MCP-aware tools, Knowledge Graph-backed agents, and internal Duo tooling) and helping establish practical norms for how the team uses AI while maintaining strong engineering judgment.</li>
<li>Mentoring and supporting other engineers through pairing, technical design reviews, and knowledge-sharing, reinforcing shared ownership of the system and its operational sustainability.</li>
<li>Contributing across the stack when needed, including occasional Ruby (Rails integration and authorization paths) or frontend work (for example, the Software Architecture Map UI) to close gaps and keep delivery moving.</li>
</ul>
<p>This role requires significant experience building and operating production backend systems, with a track record of owning reliability, maintainability, and on-call readiness for services that support other product teams or platforms. Strong engineering skills in Rust or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive backend codebase are essential. Additionally, strong system design skills, including making and explaining clear architectural decisions, documenting constraints, and aligning trade-offs with product and platform needs, are necessary.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, ClickHouse, NATS JetStream, Data Insights Platform, graph data modeling, query patterns, property graphs, Cypher/GQL, n-hop traversals, aggregations, multi-tenant isolation, service level objectives, observability, runbooks, incident response, capacity planning, production readiness, AI-assisted development workflows, MCP-aware tools, Knowledge Graph-backed agents, internal Duo tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481945002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09a4d1ce-cde</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced Data Engineer to partner with our Data Science and Data Infrastructure teams to own and scale our data pipelines. You&#39;ll also work closely with stakeholders across business teams including sales, marketing, and finance to ensure that the data they need arrives promptly and reliably.</p>
<p>As a Data Engineer at Figma, you will be responsible for building and maintaining scalable data pipelines that connect various cloud data sources. You will develop a deep understanding of Figma&#39;s core data models and optimize data pipelines for scale. You will partner with the Data Science and Data Infrastructure teams to build new foundational data sets that are trusted, well understood, and enable self-service.</p>
<p>You will work with a wide range of cross-functional stakeholders to derive requirements and architect shared datasets, with the ability to document, simplify, and explain complex problems to different types of audiences. You will establish best practices for the development of specialized data sets for analytics and modeling.</p>
<p>We&#39;d love to hear from you if you have:</p>
<ul>
<li>4+ years in a relevant field.</li>
<li>Fluency with both SQL and Python.</li>
<li>Familiarity with Snowflake, dbt, Dagster, and ETL/reverse ETL tools.</li>
<li>Excellent judgment and creative problem-solving skills.</li>
<li>A self-starting mindset along with strong communication and collaboration skills.</li>
</ul>
<p>While not required, it&#39;s an added plus if you also have:</p>
<ul>
<li>Knowledge in data modeling methodologies to design and build robust data architectures for insightful analytics.</li>
<li>Experience with business systems such as Salesforce, Customer IO, Stripe, NetSuite is a big plus.</li>
</ul>
<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$140,000-$348,000 USD</Salaryrange>
      <Skills>SQL, Python, Snowflake, dbt, Dagster, ETL/reverse ETL tools, data modeling methodologies, business systems such as Salesforce, Customer IO, Stripe, NetSuite</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design and collaboration platform that helps teams bring ideas to life. It was founded in 2012 and has grown to become a leading player in the design and collaboration space.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5220003004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dbd787ee-1b3</externalid>
      <Title>Senior Software Engineer - Security Platform Team</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer on the Security Platform Delivery and Success team, you will help design and build workflows that make it easier for customers to understand how AI is helping them save time and money. You&#39;ll collaborate closely with other security engineering teams, as well as with product and design, to deliver resilient, high-scale features used by security practitioners around the world.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing and implementing features that power Value Reports, AI-powered Inboxes, and Guided onboarding for all of the Security products.</li>
<li>Building and evolving APIs that correlate entities, findings, signals, and configuration data into coherent security stories.</li>
<li>Developing scalable, high-performance systems within the Elastic ecosystem and cloud-native environments.</li>
<li>Owning the reliability, observability, and operational health of these systems, from design to production.</li>
<li>Collaborating with product, design, and other engineering teams to refine requirements and deliver impactful, user-centered solutions.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Solid programming experience in JavaScript/TypeScript, with a particular emphasis on React.js and Node.js frameworks.</li>
<li>Experience designing and implementing APIs, data models, and services that support rich application workflows.</li>
<li>Ability to take ownership of problems end-to-end: from clarifying requirements and proposing designs to delivering, monitoring, and iterating in production.</li>
<li>Comfort working in a distributed, async-first environment and collaborating with colleagues across time zones.</li>
<li>A proactive mindset: you ask the right questions, challenge assumptions, and look for ways to improve both product and engineering processes.</li>
</ul>
<p>Bonus points for familiarity with Kibana or Elasticsearch, and contributions to open source.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$128,300-$203,000 CAD</Salaryrange>
      <Skills>JavaScript, TypeScript, React.js, Node.js, API design, data modeling, service development, Kibana, Elasticsearch, open source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. Their Search AI Platform, used by more than 50% of the Fortune 500, brings together the precision of search and the intelligence of AI.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7310546</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86696218-8f0</externalid>
      <Title>Staff Backend Engineer (Ruby on Rails/AI), Verify</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a central role in how we integrate AI into CI/CD workflows. Your work will impact performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>
<p>In this role, you&#39;ll go beyond using AI tools and help define how we design, build, and iterate on AI-assisted and agentic CI experiences. You&#39;ll set standards for what good looks like across our AI agent portfolio, including how we measure success, how we instrument behavior in production, and how we account for large language model limitations. You&#39;ll also help responsibly integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>
<p>We have ambitious goals for Agentic CI in FY27. As a Staff Engineer, you will:</p>
<ul>
<li>Partner with Engineering, Product, and UX leadership to pressure-test our priorities: where we can move faster, where we&#39;re missing data, and where there&#39;s whitespace to innovate. Part of this includes learning and growing with the Engineering team you will collaborate closely with.</li>
<li>Define what success looks like across our agent portfolio and make sure we&#39;re tracking against it: not just shipping, but learning.</li>
<li>Bring a sharp eye to the competitive landscape, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>
</ul>
<p>Examples of Agentic CI work we have planned for the upcoming year:</p>
<ul>
<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>
<li>Automate the Fix a Failing Pipeline flow at scale, from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>
<li>Build the instrumentation and observability layer that makes agentic CI trustworthy, including trigger volume dashboards, retry rates, and cost safeguards, so we can measure what&#39;s working, catch what isn&#39;t, and iterate with confidence.</li>
<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>
</ul>
<p>In this role, you will:</p>
<ul>
<li>Shape and scale GitLab CI backend infrastructure to improve performance, reliability, and usability for users running jobs at high volume.</li>
<li>Design and implement AI-powered features for Agentic CI, including agents, agentic flows, and LLM-backed tooling that integrates with GitLab&#39;s Duo Agent Platform.</li>
<li>Define what success looks like for AI in CI before you build, including baselines, measurable outcomes, and clear signals that help the team learn and iterate.</li>
<li>Build the instrumentation and observability needed to make AI-assisted CI trustworthy in production, including feature behavior metrics, dashboards, and safeguards.</li>
<li>Own and drive measurable performance improvements across CI systems (for example, database access patterns, background processing, and job orchestration) by forming hypotheses, running experiments, and validating results with data.</li>
<li>Write secure, well-tested, maintainable Ruby on Rails code in a large monolith, improving existing features while reducing technical debt and operational risk.</li>
<li>Lead cross-functional technical work with Product, UX, and Infrastructure, influencing architecture and execution across the Verify stage.</li>
<li>Share standards, patterns, and learnings with other engineers, raising the bar for responsible AI integration and evidence-driven engineering across CI.</li>
</ul>
<p>This role requires:</p>
<ul>
<li>Advanced proficiency with Ruby and Ruby on Rails, with experience building and maintaining reliable backend services in a large codebase.</li>
<li>Strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation.</li>
<li>Hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains.</li>
<li>Practical experience designing and shipping AI-powered backend features and integrations, including sound judgment about large language model limitations and responsible use in production.</li>
<li>A data-driven approach to engineering: defining hypotheses, establishing baseline metrics, instrumenting changes, and measuring outcomes against clear success criteria.</li>
<li>Familiarity with observability patterns and tools (metrics, logging, tracing) to diagnose issues, improve reliability, and guide iteration.</li>
<li>Strong backend architecture and delivery practices, including secure design, well-tested code, and strategies for safe rollouts and zero-downtime changes.</li>
<li>Clear written and verbal communication skills, including writing technical proposals and documentation, and collaborating effectively in a remote, asynchronous, cross-functional environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Ruby on Rails, PostgreSQL, Data modeling, Query tuning, Scaling large tables, High-traffic production systems, CI, Workflow orchestration, Infrastructure-heavy domains, AI-powered backend features, Large language model limitations, Responsible use in production, Data-driven approach to engineering, Observability patterns, Metrics, Logging, Tracing, Backend architecture, Delivery practices, Secure design, Well-tested code, Safe rollouts, Zero-downtime changes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8448283002</Applyto>
      <Location>Remote, APAC; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US; Remote, US-Southeast</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1be89b3c-bc1</externalid>
      <Title>Staff Analytics Engineer</Title>
      <Description><![CDATA[<p>We are currently hiring for multiple teams:</p>
<p>Foundational Data team: Our mission in the Foundational Data team is to build and maintain high-quality datasets frequently used across all of Airbnb. We set company-wide standards that decide how locations are grouped into regions, visitors are measured based upon site traffic, bot traffic is separated from organic traffic, and cloud costs are attributed to Airbnb services. This data is used to build public financial reports, drive strategic marketing decisions, and manage operational costs.</p>
<p>AirCover Data Foundation: The AirCover Data Foundation team is responsible for providing trustworthy, consistent data and metrics to facilitate business insights, informed decision-making, and seamless operations across Airbnb&#39;s AirCover programs, such as Guest Travel Insurance, AirCover for Hosts, and AirCover for Guests.</p>
<p>As a Staff Analytics Engineer, you will bring a unique lens to our data strategy and provide in-depth technical mentorship and leadership to the team. We are looking for someone with expertise in data modeling, metric development, and large-scale distributed data processing frameworks like Presto or Spark.</p>
<p>Leveraging our internal, top-tier data tooling alongside other resources, you will empower both technical and non-technical teams across Airbnb to utilize our data for making decisions grounded in evidence. Staff-level engineers are expected to do this with a minimal amount of supervision. We value innovative thinkers who consistently seek smarter and more efficient solutions while managing daily operations, deadlines, and collaborating with team members.</p>
<p>A Typical Day:</p>
<ul>
<li>Develop high-quality data assets to satisfy a wide range of use-cases</li>
<li>Develop frameworks and tools to scale insight generation to meet critical business and infrastructure requirements</li>
<li>Collaborate and build strong partnerships with other data practitioners throughout Airbnb</li>
<li>Influence the trajectory of data in decision making</li>
<li>Improve trust in our data by championing for data quality across the stack</li>
</ul>
<p>Your Expertise:</p>
<ul>
<li>9+ years of experience with a BS/Masters or 6+ years with a PhD</li>
<li>Fluent in SQL and proficient in at least one data engineering language, such as Python or Scala</li>
<li>Expertise using business intelligence and reporting tools like Superset and Tableau</li>
<li>Expertise in large-scale distributed data processing frameworks like Presto or Spark</li>
<li>Expertise in data modeling for data warehouses and/or metrics repositories</li>
<li>Experience with an ETL framework like Airflow</li>
<li>Clear and mature communication skills: ability to distill complex ideas for technical and non-technical stakeholders</li>
<li>Ability to provide technical leadership and mentorship, guiding teams on best practices and contributing to the development of analytic engineering strategies</li>
<li>Experience exploring and leveraging LLMs in everyday tasks (coding, documentation, etc.)</li>
<li>Strong capability to forge trusted partnerships across working teams</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Scaling data tasks via automation</li>
<li>Previous experience in large-scale cloud-based software engineering or system architecture</li>
<li>Experience with AB experimentation</li>
<li>Familiarity with AI/ML algorithms, including their dependencies on data, as well as their respective strengths and limitations</li>
<li>Designing and/or leveraging high-quality data visualization tools</li>
</ul>
<p>Your Location: This position is US - Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager. While the position is Remote Eligible, you must live in a state where Airbnb, Inc. has a registered entity.</p>
<p>Our Commitment To Inclusion &amp; Belonging: Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply. We strive to also provide a disability inclusive application and interview process. If you are a candidate with a disability and require reasonable accommodation in order to submit an application, please contact us at: reasonableaccommodations@airbnb.com.</p>
<p>How We&#39;ll Take Care of You: Our job titles may span more than one career level. The actual base pay is dependent upon many factors, such as: training, transferable skills, work experience, business needs and market demands. The base pay range is subject to change and may be modified in the future. This role may also be eligible for bonus, equity, benefits, and Employee Travel Credits. Pay Range $194,000-$240,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$194,000-$240,000 USD</Salaryrange>
      <Skills>SQL, Python, Scala, Presto, Spark, Superset, Tableau, ETL, Airflow, Data Modeling, Data Warehousing, Metrics Repositories, LLM AI, AI/ML Algorithms, Data Visualization, Cloud-Based Software Engineering, System Architecture, AB Experimentation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most well-known travel companies in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7733495</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae53b8d4-8fd</externalid>
      <Title>Sr. AI Engineer, Application Engineering</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Sr. AI Engineer to join our IT department to manage AI agentic deployments and deliver real impact for our customers. As a Sr. AI Engineer, you will design and develop tailored solutions using the Elastic Agent Builder platform and related technologies, guide technical engagements, and support the growth of junior engineers.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Technical Delivery and Implementation: Own the end-to-end technical delivery of AI solutions, including writing code, configuring systems, and resolving issues, while reviewing the work of junior team members to ensure quality deployment and measurable business impact.</li>
<li>AI Solution Development: Take ownership of designing and implementing scalable production systems, including AI and Large Language Model (LLM) based intelligent agents and automated workflows built on the Salesforce platform.</li>
<li>Custom Agentic AI Engineering: Work directly with stakeholders to design and build custom intelligent agents using the Elastic Agent Builder platform, ensuring solutions meet unique business requirements and integrate smoothly with existing tool ecosystems.</li>
<li>Data Configuration and Integration: Own the full data lifecycle, from data model design to building efficient processing pipelines and establishing integration strategies. Ensure data is optimized and secure for AI applications, including in complex enterprise environments.</li>
<li>Technical Problem Solving: Identify, analyze, and resolve technical challenges across all phases of solution delivery, from data integration to model deployment and agent orchestration. Serve as a reliable resource for unblocking progress.</li>
<li>Agentic Innovation: Develop expertise in the Elastic platform, pushing its capabilities forward. Lead the development of custom intelligent agents, automate business processes, and shape user experiences. Insights from the field will directly influence product enhancements and platform direction.</li>
<li>Client Partnership: Embed with client teams to understand their operational challenges and goals. Translate requirements into clear technical designs, build strong relationships, and serve as a trusted technical advisor.</li>
<li>Debugging and Root Cause Analysis: Perform thorough analysis, debugging, and root cause identification for complex system interactions, data flows, and AI model behaviors to optimize performance and prevent recurring issues.</li>
<li>Prototyping and Iteration: Rapidly develop proofs-of-concept and minimum viable products, often coding alongside client teams to demonstrate capabilities and gather feedback for iterative refinement.</li>
<li>Engineering Best Practices: Apply and promote standards for code quality, scalability, security, and maintainability across all deployed solutions.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>At least 5 years&#39; experience in a hands-on, end-to-end delivery role for scalable production solutions in a professional environment</li>
<li>Expert-level proficiency in one or more programming languages (e.g., JavaScript, Java, Python)</li>
<li>Extensive experience building and deploying solutions with AI/LLM technologies, including integrating LLMs, applying AI orchestration frameworks (e.g., LangChain, LlamaIndex), prompt engineering techniques, and agentic frameworks</li>
<li>Deep expertise in data modeling, processing, integration, and analytics, with proficiency in enterprise data platforms (e.g., Salesforce Data Cloud, Snowflake, Databricks, BigQuery)</li>
<li>Strong collaboration, communication, and presentation skills, both written and verbal, with the ability to explain complex technical concepts to technical and non-technical partners</li>
<li>Track record of leading technical engagements, mentoring junior team members, and taking responsibility for technical aspects of projects</li>
</ul>
<p>This role is eligible to participate in Elastic&#39;s stock program and has a competitive salary range of $94,300-$149,200 USD, with an alternate range of $113,300-$179,200 USD in select locations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$94,300-$149,200 USD</Salaryrange>
      <Skills>JavaScript, Java, Python, AI/LLM technologies, LangChain, LlamaIndex, prompt engineering techniques, agentic frameworks, data modeling, processing, integration, analytics, Salesforce Data Cloud, Snowflake, Databricks, BigQuery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that enables users to find answers in real-time using all their data, at scale. They provide a cloud-based platform for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7722032</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3168d7d3-70b</externalid>
      <Title>Partner Solutions Architect - North America</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>
<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>
<p>Responsibilities</p>
<ul>
<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>
<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>
<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>
<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>
<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>
<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>
<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>
<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>
<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>
<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>
<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>
<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>
<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>
<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>
<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>
<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>
<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>
<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>
<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>
<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>
</ul>
<p>What will make you stand out</p>
<ul>
<li>Experience working directly in partner, alliance, or ecosystem roles</li>
<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>
<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>
<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>
<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>
<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>
<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>
<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation (and yes we use it!)</li>
<li>Pension coverage</li>
<li>Excellent healthcare</li>
<li>Paid Parental Leave</li>
<li>Wellness stipend</li>
<li>Home office stipend, and more!</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a software company that provides an analytics engineering platform used by over 90,000 teams every week, driving data transformations and AI use cases. As of February 2025, they have surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005</Applyto>
      <Location>Canada - Remote; US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b7b8d06f-881</externalid>
      <Title>Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help build and operate a graph data service that supports GitLab Duo agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll join a small, Rust-first team that values clear ownership, thoughtful system design, and rigorous thinking about data and reliability. The Knowledge Graph service is a Rust backend that builds a property graph from GitLab’s software development lifecycle (SDLC) and code data. It uses ClickHouse, NATS JetStream, and the Data Insights Platform. It exposes secure graph queries and MCP tools used by AI agents and product features.</p>
<p>In this role, you’ll deliver features and improvements in well-scoped areas, learn the broader architecture, and contribute to reliability, observability, and operational readiness. In your first year, you’ll take clear ownership of specific components or features (for example, parts of the SDLC indexing pipeline or query paths). You’ll help reduce single points of failure with better tests and runbooks, and you’ll help the team ship analytical services that are easier to maintain and evolve over time.</p>
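<p>To give a flavor of the query patterns involved, the sketch below walks a tiny in-memory property graph with a bounded multi-hop (n-hop) traversal. It is purely illustrative: the actual service is written in Rust on top of ClickHouse, and the node and edge labels here are invented for the example.</p>
<pre><code># Illustrative only: a bounded n-hop traversal over a toy property graph.
# The real Knowledge Graph service is a Rust backend; the labels below are made up.
from collections import deque

# adjacency list: node id -> list of (edge_label, neighbor id)
GRAPH = {
    "project:1": [("has_merge_request", "mr:10"), ("has_issue", "issue:7")],
    "mr:10": [("modifies", "file:app.rb"), ("closes", "issue:7")],
    "issue:7": [("labeled", "label:bug")],
    "file:app.rb": [],
    "label:bug": [],
}

def n_hop(start, max_hops):
    """Return every node reachable from start within max_hops edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for _label, neighbor in GRAPH.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen - {start}

print(n_hop("project:1", 2))  # everything within two hops of the project node
</code></pre>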
<p>Responsibilities:</p>
<ul>
<li>Implement and iterate on backend features in the Rust-based Knowledge Graph service, including changes to the query engine, SDLC and code indexing flows, and API endpoints (including MCP endpoints) under guidance from senior and staff engineers.</li>
</ul>
<ul>
<li>Help maintain integrations between Knowledge Graph and the rest of the GitLab platform, working in areas that touch GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform.</li>
</ul>
<ul>
<li>Contribute to system design discussions by proposing options, raising questions, and documenting decisions, with a focus on reliability, scalability, and maintainability for analytical graph workloads.</li>
</ul>
<ul>
<li>Improve the operational maturity of the service by adding or enhancing metrics, logging, runbooks, alerts, and small readiness tasks, and by participating in on-call rotation as appropriate for your level and experience.</li>
</ul>
<ul>
<li>Collaborate asynchronously with product, data, infrastructure, security, and AI counterparts to clarify requirements, align on scope, and ship features safely for customers and sustainably for the team.</li>
</ul>
<ul>
<li>Use AI-assisted development workflows responsibly (for example, using Knowledge Graph-backed agents and internal Duo tooling), and share what works with the team while keeping a strong focus on code quality and correctness.</li>
</ul>
<ul>
<li>Participate in code reviews, knowledge-sharing sessions, and pairing to both learn from others and help maintain consistent standards across the codebase.</li>
</ul>
<ul>
<li>Contribute across the stack when needed, including occasional Ruby work for Rails integration and authorization paths, or small frontend changes related to Knowledge Graph features (for example, Software Architecture Map UI plumbing).</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Professional experience building and maintaining backend systems in production, with an understanding of reliability, maintainability, and how to support services over time (incident response, follow-ups, etc.).</li>
</ul>
<ul>
<li>Proficiency in at least one modern backend language and strong interest in Rust, with either prior Rust experience or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive codebase.</li>
</ul>
<ul>
<li>Some exposure to distributed data or analytics systems (for example, OLAP databases, Kafka- or NATS-style messaging, or change data capture (CDC) pipelines), or strong motivation to develop those skills in this role.</li>
</ul>
<ul>
<li>Interest in graph data modeling and query patterns (property graphs, multi-step (n-hop) traversals, aggregations), and willingness to learn the tools and concepts used in Knowledge Graph over time.</li>
</ul>
<ul>
<li>Practical experience (or strong interest) using AI tools in day-to-day development, along with a thoughtful approach to validating outputs and integrating AI into your workflow.</li>
</ul>
<ul>
<li>A language-agnostic mindset and evidence that you can pick up new languages and frameworks as needed (for example, Ruby, Go, or TypeScript/Vue where the work touches adjacent systems).</li>
</ul>
<ul>
<li>Solid fundamentals in system design for your level, including the ability to reason about trade-offs, ask good questions, and align your implementation work with documented architectural decisions.</li>
</ul>
<ul>
<li>Comfort working in a low-process, high-ownership environment where you take responsibility for your work, communicate progress clearly, and help refine problem statements with your teammates.</li>
</ul>
<ul>
<li>Strong written communication and comfort collaborating asynchronously across time zones in an all-remote team.</li>
</ul>
<p>About the team:</p>
<p>We sit within the Data Engineering organization. We&#39;re a small group of senior engineers and we work closely with partners across AI (Duo Agent Platform), analytics, infrastructure and delivery, and security because our work spans many parts of the platform. We collaborate asynchronously and optimize for strong ownership rather than a feature factory model. We each build a meaningful understanding of the system and help evolve it over time. A key challenge for us right now is scaling sustainably. That includes hardening multi-tenant behavior, maturing observability and readiness, and keeping the system healthy and maintainable as usage grows and team members take time off. At the same time, we&#39;re bringing Knowledge Graph to general availability (GA).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$98,000-$210,000 USD</Salaryrange>
      <Skills>Rust, backend systems, reliability, maintainability, distributed data, analytics systems, graph data modeling, query patterns, AI tools, system design, low-process, high-ownership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8437754002</Applyto>
      <Location>Remote, Canada; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>41c3ee08-08e</externalid>
      <Title>Optimization Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented mid-level Software Engineer with a strong background in optimization to join our growing team at Anduril Labs. In this role, you will be instrumental in developing advanced algorithms and software solutions to tackle complex, multi-domain optimization problems critical to national defense and Anduril&#39;s autonomous systems.</p>
<p>The ideal candidate possesses deep expertise in classical optimization algorithms, robust Python programming skills, and a solid foundation in data modeling. Experience with developing hybrid quantum optimization solutions is a plus.</p>
<p>You will leverage state-of-the-art, GenAI-powered development tools such as Claude Code to accelerate solution development and enhance our optimization software. This role demands creative problem-solving, a self-starter mentality, and the ability to rapidly apply algorithmic theory and mathematical modeling to practical, real-world optimization challenges.</p>
<p>You will be designing, implementing, and deploying optimization algorithms and services that integrate seamlessly into larger defense systems, working across various platforms (on-prem, cloud, and hybrid quantum computing environments).</p>
<p>Familiarity with modeling linear and non-linear optimization problems, rapid prototyping, integrating optimization solutions into existing architectures, leveraging APIs, and utilizing open-source tools will be crucial.</p>
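<p>As a minimal, purely illustrative example of what a linear-programming formulation looks like in code, the sketch below solves a tiny invented resource-allocation problem with SciPy (one of the open-source tools referenced in the requirements); real mission-planning or logistics models would of course be far larger and richer.</p>
<pre><code># Toy linear program solved with scipy.optimize.linprog (coefficients invented).
# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
from scipy.optimize import linprog

c = [-3, -2]                     # linprog minimizes, so negate the objective
A_ub = [[1, 1], [1, 3]]          # left-hand sides of the <= constraints
b_ub = [4, 6]                    # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")

print(res.x, -res.fun)           # optimal allocation and (maximized) objective value
</code></pre>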
<p>If you thrive in a dynamic environment that values creative problem-solving, love writing code, excel as both an individual contributor and team player, are eager to learn, and bring a can-do attitude, this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, develop, and implement highly efficient optimization algorithms and software solutions to solve challenging problems in areas such as resource allocation, scheduling, routing, mission planning, control systems, and supply chain logistics.</li>
</ul>
<ul>
<li>Apply classical optimization techniques (e.g., linear programming, mixed-integer linear programming, combinatorial optimization, network flow, dynamic programming, heuristics, metaheuristics) to model problems and explore novel solution approaches.</li>
</ul>
<ul>
<li>Utilize GenAI tools (e.g., OpenAI Codex, Claude Code, GitHub Copilot) to rapidly prototype, refine, and test algorithmic solutions, improving development velocity and code quality.</li>
</ul>
<ul>
<li>Develop robust data models and efficient data pipelines to support complex optimization problems, ensuring data integrity and efficient processing for algorithmic inputs and outputs.</li>
</ul>
<ul>
<li>Collaborate with multidisciplinary teams (software engineers, data scientists, domain experts, product managers) to integrate optimization engines and services into larger defense systems and platforms.</li>
</ul>
<ul>
<li>Perform rigorous testing, validation, and performance analysis of optimization solutions, ensuring scalability, reliability, and accuracy under diverse operational conditions.</li>
</ul>
<ul>
<li>Participate actively in the entire Software Development Lifecycle (SDLC) from requirements gathering and design to deployment, monitoring, and maintenance.</li>
</ul>
<ul>
<li>Support Anduril- and customer-funded R&amp;D efforts, contributing to technical documentation, presentations, and patent applications.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, Applied Mathematics, Operations Research, or a related quantitative field.</li>
</ul>
<ul>
<li>3+ years of professional experience in software development with a dedicated focus on optimization, algorithmic problem-solving, or operations research.</li>
</ul>
<ul>
<li>Experience solving optimization problems in defense, transportation, supply chain, logistics, network optimization, smart grids or similar.</li>
</ul>
<ul>
<li>Expert proficiency in Python for scientific computing and robust software development.</li>
</ul>
<ul>
<li>Strong theoretical and practical understanding of classical optimization algorithms (e.g., linear programming, mixed-integer linear programming, constraint programming, network flow, dynamic programming, heuristics, metaheuristics).</li>
</ul>
<ul>
<li>Hands-on experience with optimization libraries and commercial/open-source solvers (e.g., SciPy Optimize, PuLP, CVXPY, Gurobi, CPLEX, OR-Tools, GEKKO).</li>
</ul>
<ul>
<li>Solid experience with data modeling, data structures, and algorithms to efficiently prepare, process, and manage data for optimization problems.</li>
</ul>
<ul>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
</ul>
<ul>
<li>Proficiency in using numerical computing libraries such as NumPy, SciPy, and Pandas.</li>
</ul>
<ul>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
</ul>
<ul>
<li>Ability to develop, test, and deploy software effectively on Linux-based systems.</li>
</ul>
<ul>
<li>Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
</ul>
<ul>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
</ul>
<ul>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
</ul>
<ul>
<li>Excellent communication and interpersonal skills, with the ability to effectively articulate complex technical concepts to diverse audiences.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s or Ph.D. in Computer Science, Applied Mathematics, Operations Research, or a closely related quantitative field.</li>
</ul>
<ul>
<li>Familiarity with or a strong interest in quantum optimization algorithms, quantum computing concepts, or quantum-inspired heuristic approaches.</li>
</ul>
<ul>
<li>Experience with D-Wave’s quantum annealing platform is a plus.</li>
</ul>
<ul>
<li>Experience with performance-critical programming languages such as C++ or Java.</li>
</ul>
<ul>
<li>Experience with cloud platforms (e.g., AWS, Azure, GCP) for deploying scalable optimization solutions or high-performance computing (HPC) environments.</li>
</ul>
<ul>
<li>Prior experience in defense, aerospace, logistics, supply chain management, robotics, or manufacturing optimization domains.</li>
</ul>
<ul>
<li>Familiarity with integrating machine learning models with optimization techniques (e.g., prescriptive analytics, reinforcement learning for optimization).</li>
</ul>
<ul>
<li>Excellent communication skills with the ability to articulate complex technical concepts, present findings, and influence technical direction across diverse teams.</li>
</ul>
<ul>
<li>Willingness to travel up to approximately 10%.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>Python, Classical optimization algorithms, Data modeling, GenAI tools, Optimization libraries, Commercial/open-source solvers, Numerical computing libraries, Software testing principles, Linux-based systems, Git version control, Build tools, CI/CD pipelines, Quantum optimization algorithms, Quantum computing concepts, Quantum-inspired heuristic approaches, Performance-critical programming languages, Cloud platforms, High-performance computing environments, Machine learning models, Prescriptive analytics, Reinforcement learning for optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089067007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a29684d-d2d</externalid>
      <Title>Senior Analytics Developer - Platform Analytics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Analytics Engineer to join our Platform Analytics team. In this role, you&#39;ll design and evolve core analytical data models that power trusted, self-service analytics across Elastic. You&#39;ll shape the underlying structure of our analytics layer, aligning definitions, improving usability, and enabling faster, more reliable insights for teams across the company.</p>
<p>This role goes beyond delivering within existing patterns. You&#39;ll improve foundational modeling decisions, reduce rework, and establish standards that scale.</p>
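<p>Much of this work happens in dbt models over BigQuery (see the responsibilities below). dbt models are typically SQL, but dbt also supports Python models; as a deliberately minimal, purely illustrative sketch, with an invented model name, upstream reference, and column, and assuming an adapter configured to run Python models against a PySpark-style DataFrame:</p>
<pre><code># models/fct_active_orgs.py -- hypothetical dbt Python model, for illustration only.
# The upstream model name and column are invented; a PySpark-backed DataFrame is assumed.
def model(dbt, session):
    dbt.config(materialized="table")
    events = dbt.ref("stg_product_events")   # hypothetical staging model
    # Standardize business logic early in the transformation layer:
    # keep only rows with a resolvable org identifier.
    return events.where("org_id IS NOT NULL")
</code></pre>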
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build core analytical data models in BigQuery using dbt</li>
<li>Refactor and restructure existing models to improve clarity, consistency, and ease of use</li>
<li>Partner directly with solution teams to translate business needs into well-defined, reusable data models</li>
<li>Define and enforce modeling standards, conventions, and layer contracts</li>
<li>Standardize identifiers and business logic early in the transformation layer to reduce downstream complexity</li>
<li>Centralize shared business rules and definitions to enable consistent, trusted analytics</li>
<li>Explore and apply AI-assisted approaches to improve analytics workflows</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong expertise in Python, SQL, and analytics data modeling</li>
<li>5+ years of experience in analytics engineering, data engineering, or a related role</li>
<li>Hands-on experience designing analytics layers in BigQuery and dbt</li>
<li>Proven ability to create analyst-friendly data models with clear structure and predictable behavior</li>
<li>Experience setting standards and influencing how data is modeled and consumed across teams</li>
<li>Strong analytical thinking and problem-solving skills</li>
<li>Clear written and verbal communication skills</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience working in a distributed or remote-first environment</li>
<li>Familiarity with metric definitions or semantic layers</li>
<li>Experience applying AI or automation to analytics or data modeling workflows</li>
</ul>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $128,300-$203,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$128,300-$203,000 CAD</Salaryrange>
      <Skills>Python, SQL, analytics data modeling, BigQuery, dbt, AI-assisted approaches, metric definitions, semantic layers, AI or automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7614524</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a3dc5a7-8d9</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to support the Enterprise by building reliable, well-modeled, and trusted data for reporting, decision-making, and emerging AI use cases.</p>
<p>As a Senior Analytics Engineer, you will design scalable data models, define consistent business logic, and help establish a strong semantic foundation that enables both human analytics and machine-driven intelligence.</p>
<p>You will partner closely with Finance, People and Company Operations stakeholders, Data Analysts, and Data Engineers to ensure data is accurate, consistent, and easy to consume, whether through dashboards, self-service exploration, or AI-powered workflows.</p>
<p>Responsibilities:</p>
<p>Data Modeling &amp; Semantics</p>
<ul>
<li>Design, build, and maintain scalable data models using dbt and Snowflake</li>
<li>Define and standardize core Finance, HR, and Enterprise-level metrics (e.g., revenue, ARR, billing, attrition, executive insights, security) with clear, governed logic</li>
<li>Establish consistent modeling patterns, naming conventions, and semantic clarity across datasets</li>
<li>Contribute to a shared semantic layer that supports both analytics and AI use cases</li>
</ul>
<p>AI-Ready Data &amp; Snowflake Ecosystem</p>
<ul>
<li>Prepare high-quality, well-governed datasets for use with Snowflake Cortex and Snowflake Intelligence</li>
<li>Enable structured data foundations that support LLM-powered use cases, semantic querying, and intelligent applications</li>
<li>Ensure data is context-rich, well-documented, and aligned with business meaning to improve AI accuracy and trust</li>
</ul>
<p>Data Quality, Governance &amp; Trust</p>
<ul>
<li>Implement robust testing, validation, and documentation practices in dbt</li>
<li>Ensure consistency across reports and dashboards through shared definitions and reusable models</li>
<li>Apply data governance best practices, including access controls, lineage, and auditability</li>
<li>Partner across teams to establish clear ownership and accountability for data assets</li>
</ul>
<p>Collaboration &amp; Delivery</p>
<ul>
<li>Partner with Finance, Analysts, and cross-functional stakeholders to translate business needs into data solutions</li>
<li>Support self-service analytics by building intuitive, reusable datasets</li>
<li>Contribute to scalable data workflows that balance immediate business needs with long-term maintainability</li>
<li>Work within an agile environment, contributing to planning, prioritization, and continuous improvement</li>
</ul>
<p>AI and Data Mindset</p>
<ul>
<li>Demonstrate an AI-first mindset, thinking beyond data models and dashboards to how data can power intelligent systems and decision-making</li>
<li>Understand the importance of well-modeled, well-documented, and semantically clear data for AI and LLM-based use cases</li>
<li>A level of comfort leveraging AI-assisted workflows to improve productivity, code quality, and consistency</li>
<li>Curiosity for emerging capabilities in platforms like Snowflake Cortex and Snowflake Intelligence, and how they can be applied to Enterprise analytics</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5–8+ years of experience in Analytics Engineering, Data Engineering, or similar roles</li>
<li>Strong SQL skills and experience building analytics-ready data models</li>
<li>Mentorship &amp; Engineering Excellence: mentoring others, raising the technical bar, and establishing organization-wide standards for dbt/SQL quality and CI/CD</li>
<li>Hands-on experience with dbt and Snowflake, or other ETL, modeling, and database platforms</li>
<li>Solid understanding of data modeling principles, including dimensional modeling and semantic design</li>
<li>Ability to navigate highly ambiguous business challenges, translating vague, complex, or competing goals from executive stakeholders into clear, actionable, and robust data solutions</li>
<li>Experience translating business requirements into clear, maintainable data logic</li>
<li>Familiarity with SaaS metrics and Finance and People data (e.g., ARR, revenue recognition, billing, attrition, etc.)</li>
<li>Experience with data quality, testing, and documentation best practices</li>
<li>Exposure to Python, R, or data processing frameworks (e.g., PySpark) is a plus</li>
<li>Experience with BI tools such as Tableau or Looker</li>
<li>Strong communication skills and ability to work across technical and business teams</li>
</ul>
<p>What you can look forward to as an Okta employee!</p>
<ul>
<li>Amazing Benefits</li>
<li>Making Social Impact</li>
<li>Fostering Diversity, Equity, Inclusion and Belonging at Okta</li>
<li>Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, Snowflake, SQL, data modeling, dimensional modeling, semantic design, ETL, data quality, testing, documentation, Python, R, PySpark, Tableau, Looker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7818510</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>25f010f0-7d1</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Brex’s AI-native automation and world-class service eliminate manual expense and accounting tasks for customers so they can focus on what matters most. Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>Our Scientists and Engineers work together to make data, and insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>
<p>What You’ll Do</p>
<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. You also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>
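<p>To give a concrete, if simplified, flavor of the pipeline work, here is a minimal daily job sketched as an Airflow DAG (Airflow is named in the requirements below); every name in it is invented for the example, and the Airflow 2.x API is assumed:</p>
<pre><code># Minimal Airflow 2.x DAG sketch; dag/task/table names are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def rebuild_core_data_table():
    # Placeholder for the real work: staging raw data and rebuilding a curated,
    # source-of-truth Core Data table (for example via dbt against the warehouse).
    print("rebuilding core_data.transactions")

with DAG(
    dag_id="core_data_daily",
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="rebuild_core_data",
        python_callable=rebuild_core_data_table,
    )
</code></pre>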
<p>Where you’ll work</p>
<p>This role will be based in our San Francisco office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>
</ul>
<ul>
<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that enable ease of predictive analytics, data analysis, and metrics formulation.</li>
</ul>
<ul>
<li>Maintain data documentation and definitions, building and ensuring that source-of-truth tables remain high quality for data science and reporting applications.</li>
</ul>
<ul>
<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>
</ul>
<ul>
<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>
</ul>
<ul>
<li>Set and proliferate company-wide standards for data relating to structure, quality, and expectations.</li>
</ul>
<ul>
<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>
</ul>
<ul>
<li>2+ years of experience working with modern data transformation tools like DBT.</li>
</ul>
<ul>
<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>
</ul>
<ul>
<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>
</ul>
<ul>
<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>
</ul>
<ul>
<li>Experience working with a data warehouse such as Snowflake.</li>
</ul>
<ul>
<li>Experience with a data workflow orchestrator tool such as Airflow.</li>
</ul>
<ul>
<li>Experience with a programming language such as Python.</li>
</ul>
<ul>
<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>
</ul>
<ul>
<li>Exceptional quantitative and analytical skills.</li>
</ul>
<ul>
<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$120,800 - $151,000</Salaryrange>
      <Skills>DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, Looker, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8366850002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1d204fa1-067</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Data at Brex</p>
<p>Our Scientists and Engineers work together to make data, and insights derived from data, a core asset across Brex. But it&#39;s more than just crunching numbers. The Data team at Brex develops infrastructure, statistical models, and products using data. Our work is ingrained in Brex&#39;s decision-making process, the efficiency of our operations, our risk management policies, and the unparalleled experience we provide our customers.</p>
<p>What You’ll Do</p>
<p>As a Data Engineer at Brex, you will be a core contributor in transforming raw data into actionable insights for various departments across the organization. You&#39;ll collaborate closely with Data Scientists, Software Engineers, and business units to create efficient data models, pipelines, and analytics frameworks that drive the business forward. You also play a leading role in the design, implementation, and maintenance of Core Data tables, our high-quality, curated data source for a wide range of analytic applications.</p>
<p>Where you’ll work</p>
<p>This role will be based in our Seattle office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of two coordinated days in the office per week, Wednesday and Thursday. Starting February 2, 2026, we will require three days per week in office - Monday, Wednesday and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and maintain data models and pipelines that scale with the growing number of services, products, and changes in the company.</li>
</ul>
<ul>
<li>Collaborate closely with Data Scientists, Data Analysts, and Business teams to understand their data needs, translating them into robust, efficient, scalable data solutions that enable ease of predictive analytics, data analysis, and metrics formulation.</li>
</ul>
<ul>
<li>Maintain data documentation and definitions, building and ensuring that source-of-truth tables remain high quality for data science and reporting applications.</li>
</ul>
<ul>
<li>Develop and enable integration with various data sources, allowing for more data-driven initiatives across the company.</li>
</ul>
<ul>
<li>Apply best practices in data management to ensure the reliability and robustness of data utilized across various analytics applications.</li>
</ul>
<ul>
<li>Set and proliferate company-wide standards for data relating to structure, quality, and expectations.</li>
</ul>
<ul>
<li>Act as a liaison between the technical and non-technical teams, bridging gaps and ensuring that data solutions align with business objectives.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in Data Engineering, Data Analytics, or a related field such as Analytics Engineering.</li>
</ul>
<ul>
<li>2+ years of experience working with modern data transformation tools like DBT.</li>
</ul>
<ul>
<li>Advanced knowledge of databases and SQL with the ability to efficiently stage, process, and transform data.</li>
</ul>
<ul>
<li>Experience integrating and orchestrating data workflows with various modern data tools and systems.</li>
</ul>
<ul>
<li>Experience with data modeling, ETL/ELT processes, and data warehousing solutions.</li>
</ul>
<ul>
<li>Experience working with a data warehouse such as Snowflake.</li>
</ul>
<ul>
<li>Experience with a data workflow orchestrator tool such as Airflow.</li>
</ul>
<ul>
<li>Experience with a programming language such as Python.</li>
</ul>
<ul>
<li>Familiarity with BI tools such as Looker, Tableau, or similar platforms is a plus.</li>
</ul>
<ul>
<li>Exceptional quantitative and analytical skills.</li>
</ul>
<ul>
<li>Strong communication skills and ability to collaborate with various stakeholders, both technical and non-technical.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $120,800 - $151,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$120,800 - $151,000</Salaryrange>
      <Skills>DBT, databases, SQL, data modeling, ETL/ELT processes, data warehousing solutions, Snowflake, Airflow, Python, BI tools, Looker, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides corporate cards and banking services to businesses.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8510493002</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e00b7052-70b</externalid>
      <Title>Senior Business Systems Analyst, Finance Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>
<p>You will develop Prism Analytics and Accounting Center solutions, gather requirements and build reporting capabilities, and collaborate closely with cross-functional teams to drive the successful adoption of our new ERP platform.</p>
<p>This is a critical role that will directly shape how Anthropic&#39;s finance organisation operates as we scale toward public company readiness. You will work at the intersection of finance domain expertise and technical implementation, partnering with the implementation partner, engineering teams, and finance stakeholders to build a world-class financial systems foundation.</p>
<p>Responsibilities:</p>
<ul>
<li>ERP Core Financials Implementation: Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>
</ul>
<ul>
<li>Financial Data Model (FDM) Design: Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>
</ul>
<ul>
<li>Prism Analytics Development: Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>
</ul>
<ul>
<li>Requirements Gathering &amp; Reporting: Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>
</ul>
<ul>
<li>Workshop Participation &amp; Solution Design: Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>
</ul>
<ul>
<li>Cross-Functional Collaboration: Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>
</ul>
<ul>
<li>Cutover &amp; Hypercare Planning: Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>
</ul>
<ul>
<li>Platform Expansion &amp; Adoption: Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>
</ul>
<ul>
<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>
</ul>
<ul>
<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>
</ul>
<ul>
<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>
</ul>
<ul>
<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>
</ul>
<ul>
<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>
</ul>
<ul>
<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, and technical partners</li>
</ul>
<ul>
<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>
</ul>
<ul>
<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>
</ul>
<ul>
<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>
</ul>
<ul>
<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>
</ul>
<ul>
<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>
</ul>
<ul>
<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>
</ul>
<ul>
<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>
</ul>
<ul>
<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>
</ul>
<ul>
<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>
</ul>
<ul>
<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>
</ul>
<p>The annual compensation range for this role is $205,000-$265,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Workday Financials, Workday Financial Data Model (FDM), Chart of Accounts design, Worktags configuration, Dimensional hierarchies, Accounting Books setup, Prism Analytics, Data modeling, Source integration, Calculated fields, Report development, ERP implementation lifecycles, Requirements gathering, Configuration, Testing, Data migration, Cutover planning, Hypercare, Financial accounting processes, General Ledger, Multi-entity consolidation, Intercompany accounting, Management reporting, Stakeholder management, Communication skills, Analytical skills, Problem-solving skills, Data accuracy and integrity, SQL, Python, Scripting languages, BigQuery, Cloud data warehouses, SOX compliance requirements, Internal controls, EPM/FP&amp;A systems, Pigment, Anaplan, Adaptive Planning, ERP integration, High-growth technology companies, IPO readiness, Workday HCM, HCM-Financials integration points, Data migration tools, ETL processes, Reconciliation frameworks</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4991194008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>533ef325-495</externalid>
      <Title>Director Product Management, Banking + AP</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. As Director of Product Management, Banking + AP, you will define and drive the strategy for one of Brex&#39;s most critical domains: the Operating Account platform - the combination of Banking and Accounts Payable that makes Brex the place customers run payments from day one.</p>
<p>Brex&#39;s banking product is used by tens of thousands of businesses to store $10B+ in total deposits, and our AP platform moves over $1B across hundreds of thousands of transactions each month. This role will own the end-to-end product vision and execution across customer accounts, money movement, treasury capabilities, liquidity management, accounts payable workflows, and regulatory-compliant banking services.</p>
<p>As a member of the Product team, you will be at the forefront of Brex&#39;s mission to empower employees anywhere to make better financial decisions. With a deep understanding of the business, you will identify and scope out the most impactful opportunities for Brex to tackle. You will be responsible for aligning cross-functional teams, such as Engineering, Legal, Compliance, and Design, on key decisions.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and drive Operating Account strategy - Own the multi-year product vision and roadmap for Brex&#39;s Operating Account platform spanning Banking and Accounts Payable , aligned to company-level priorities.</li>
<li>Identify and prioritize high-leverage opportunities that drive AP attach, deepen operating behavior, and grow durable DDA balances through payment activity rather than balance-seeking alone.</li>
<li>Identify customer challenges across increasing coordination and control complexity, from admin-initiated payments and centralized AP through decentralized, multi-stakeholder procurement workflows.</li>
<li>Balance innovation with regulatory rigor, ensuring long-term scalability and resilience.</li>
</ul>
<ul>
<li>Lead complex, regulated systems across Banking and AP - Oversee core banking and AP domains including accounts, ledger systems, payment rails (ACH, wires, RTP, international), bill pay, vendor management, treasury workflows, and embedded financial services.</li>
<li>Drive the integration of banking and AP capabilities into a unified operating experience, ensuring that money movement, vendor payments, and account management work seamlessly together.</li>
<li>Engage deeply with Engineering and Risk in discussions around APIs, ledger design, reconciliation, data models, and risk controls, to design reliable and scalable systems.</li>
</ul>
<ul>
<li>Build and lead a high-performing PM team - Manage and develop a team of Product Managers spanning banking and AP, fostering ownership, clarity, and high standards.</li>
<li>Elevate product craft across the organization, being at the forefront of how the role is changing as we incorporate agents.</li>
<li>Establish strong operating rhythms for planning, prioritization, and cross-functional alignment.</li>
</ul>
<ul>
<li>Deliver measurable business impact - Define success metrics tied to company outcomes such as operating account attach rate, DDA balance growth, AP transaction volume, revenue expansion, margin improvement, customer retention, risk loss rates, and operational efficiency.</li>
<li>Drive structured decision-making using data, experimentation, and financial modeling, with a sharp focus on the relationship between AP behavior and balance durability.</li>
<li>Hold teams accountable to clear impact targets and measurable results.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>10+ years of product management experience, including leading and developing high-performing PM teams and building strong product cultures.</li>
<li>Proven track record of setting and executing multi-quarter or multi-year strategies that delivered measurable business impact (growth, retention, revenue expansion, cost efficiency, or adoption).</li>
<li>Strong business and systems thinking, able to translate complex operational or infrastructure challenges into simple customer experiences that drive adoption and long-term platform stickiness.</li>
<li>Data-driven decision maker with experience defining success metrics, modeling business impact, and using SQL or similar tools to inform priorities and tradeoffs.</li>
<li>Exceptional communicator who creates clarity in complex environments, distilling strategy, tradeoffs, and decisions into narratives that align executives and mobilize teams.</li>
<li>Experience building trusted, mission-critical products where reliability, accuracy, and customer confidence are non-negotiable.</li>
<li>High standards for product rigor and execution quality, consistently raising the bar for clarity of thinking, prioritization, and customer impact.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience building or scaling accounts payable, payments, or money movement workflows, especially in B2B environments.</li>
<li>Experience with financial infrastructure, fintech platforms, or regulated products.</li>
<li>Familiarity with distributed systems, APIs, ledgers, payment rails, or financial data models.</li>
<li>Experience improving or automating procurement, bill pay, vendor management, or operational finance workflows.</li>
<li>Track record of driving deposit growth, balance-based revenue, or usage-driven platform adoption.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $340,000 - $425,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$340,000 - $425,000</Salaryrange>
      <Skills>product management, banking, accounts payable, money movement, treasury capabilities, liquidity management, financial data models, APIs, ledger design, reconciliation, risk controls, SQL, data modeling, financial modeling, data analysis, communication, leadership, team management, product development, product launch, product growth, customer acquisition, customer retention, customer satisfaction, operational efficiency, cost reduction, revenue growth, margin improvement, fintech, regulated products, distributed systems, payment rails, vendor management, procurement, bill pay, operational finance, deposit growth, balance-based revenue, usage-driven platform adoption</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides a platform for businesses to manage their finances. It offers a range of products and services, including corporate cards, banking, and spend management.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8439927002</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a6a5e65-740</externalid>
      <Title>Data Analyst III</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. By combining global corporate cards and banking with intuitive spend management, bill pay, and travel software, Brex enables founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>Tens of thousands of the world&#39;s best companies run on Brex, including DoorDash, Coinbase, Robinhood, Zoom, Plaid, Reddit, and SeatGeek.</p>
<p>Working at Brex allows you to push your limits, challenge the status quo, and collaborate with some of the brightest minds in the industry. We’re committed to building a diverse team and inclusive culture and believe your potential should only be limited by how big you can dream. We make this a reality by empowering you with the tools, resources, and support you need to grow your career.</p>
<p>Data at Brex</p>
<p>The Data organization develops insights, models, and data infrastructure for teams across Brex, including Sales, Marketing, Product, Engineering, and Operations. Our Data Scientists, Analysts, and Engineers work together to make data, and insights derived from data, a core asset across the company.</p>
<p>What you’ll do</p>
<p>As a senior Data Analyst (DA III), you will own the end-to-end analytics lifecycle for one or more business areas at Brex. You’ll go beyond building dashboards: you’ll frame the right questions, design rigorous analyses, apply statistical methods, and translate your findings into clear recommendations for leadership. You will also serve as a technical leader on the Data Analytics team, mentoring more junior analysts and helping define the standards and best practices that elevate the team’s work.</p>
<p>This role sits at the intersection of analytics, analytics engineering, and business strategy. You’ll work in a modern data stack environment and partner closely with Data Scientists, Data Engineers, and senior leaders across the organization.</p>
<p>Where you’ll work</p>
<p>This role will be based in our New York office. We are a hybrid environment that combines the energy and connections of being in the office with the benefits and flexibility of working from home. We currently require a minimum of three coordinated days in the office per week: Monday, Wednesday, and Thursday. As a perk, we also have up to four weeks per year of fully remote work!</p>
<p>Responsibilities</p>
<ul>
<li>Own the analytics lifecycle for assigned business areas: from problem framing and data sourcing through analysis, insight generation, and stakeholder presentation.</li>
<li>Build and maintain dashboards and self-service reporting tools that enable business teams to independently track performance, identify risks, and make data-driven decisions.</li>
<li>Write production-quality SQL and Python code to extract, transform, and analyze data at scale.</li>
<li>Collaborate with Data Engineers and Data Scientists to develop and maintain analytical data models, improve data pipelines, and ensure data quality across the organization.</li>
<li>Partner with leadership across Sales, Operations, Product, Finance, and other departments to identify high-impact analytical opportunities and deliver actionable recommendations.</li>
<li>Mentor other data analysts and contribute to the development of team standards, documentation, code review practices, and analytical frameworks.</li>
<li>Proactively identify gaps in data infrastructure, propose improvements, and contribute to the evolution of the team’s tooling and processes.</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in data analytics, business intelligence, or a related quantitative role.</li>
<li>3+ years of experience partnering directly with Sales, Operations, Product, or equivalent business teams as an embedded analytics partner.</li>
<li>Advanced SQL proficiency, including CTEs, window functions, performance optimization, and working across complex data models.</li>
<li>Proficiency in Python for data analysis, automation, and modeling (Pandas, NumPy, scikit-learn, or similar).</li>
<li>Experience with cloud data warehouses, particularly Snowflake (BigQuery and Databricks also valued).</li>
<li>Hands-on experience with BI and data visualization tools (Looker, Tableau, Hex, or similar).</li>
<li>Strong stakeholder management skills, with a proven ability to present complex technical findings to non-technical audiences.</li>
<li>Experience with generative AI and LLM-based tools (Claude Code, Cursor, GitHub Copilot) to perform and accelerate analyses, automate reporting, and build self-service data tools.</li>
</ul>
<p>Bonus points</p>
<ul>
<li>Demonstrated experience applying statistical methods to business problems (e.g., regression, classification, A/B testing).</li>
<li>Experience with dbt for data modeling and transformation.</li>
<li>Experience building and maintaining data pipelines using orchestration tools such as Airflow.</li>
<li>Experience working with APIs for data ingestion and integration.</li>
<li>Familiarity with version control systems (Git).</li>
<li>Experience in fintech, financial services, or payments.</li>
<li>Track record of leading cross-functional analytics projects from scoping through delivery.</li>
</ul>
<p>Compensation</p>
<p>The expected salary range for this role is $114,192 - $142,740. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$114,192 - $142,740</Salaryrange>
      <Skills>SQL, Python, Cloud data warehouses, BI and data visualization tools, Stakeholder management, Generative AI and LLM-based tools, Statistical methods, dbt for data modeling and transformation, Orchestration tools, APIs for data ingestion and integration, Version control systems, Fintech, financial services, or payments</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8463704002</Applyto>
      <Location>New York, New York, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2686d2-290</externalid>
      <Title>Staff Analytics Engineer</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>Our Data Science and Analytics team seeks to empower R&amp;D to make data-backed decisions that accelerate innovation and improve product performance. You will work closely within our team and across Product &amp; Engineering to design and maintain a robust analytics data layer that enables trusted reporting on R&amp;D metrics.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Design and implement a formal analytics data layer using AWS Glue, Presto, and LookML</li>
<li>Collaborate within the Data Science &amp; Analytics team and across Product &amp; Engineering to define, document, and maintain alignment on metric definition and data lineage</li>
<li>Develop and maintain automated data reconciliation and quality checks to proactively identify and resolve discrepancies, ensuring accuracy and consistency of critical reports and dashboards</li>
<li>Lead investigations into complex data anomalies, conduct root cause analysis, and communicate findings and solutions effectively to both technical and non-technical audiences</li>
<li>Mentor and guide members of the data science and analytics team, establishing and enforcing best practices around data modeling, testing, documentation, and code review</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$155,520 - $194,400 (Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont or Washington D.C.)
$164,640 - $205,800 (New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area))
$182,960 - $228,700 (San Francisco Bay area, California)</Salaryrange>
      <Skills>AWS Glue, Presto, LookML, SQL, data modeling, data pipelines, data reconciliation, data quality checks, Python, distributed computing technologies, Hive, Spark, dashboarding tools, Looker, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7551660</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da823275-f35</externalid>
      <Title>Finance Reporting &amp; Analytics Manager</Title>
      <Description><![CDATA[<p>We are seeking a Finance Reporting &amp; Analytics Manager to join our Finance Transformation team and focus on delivering and advancing our reporting and analytics capabilities within the Finance organization.</p>
<p>The primary mission of the Finance Transformation team is to drive change and efficiency in Finance to support strategic objectives.</p>
<p>As the Finance Reporting &amp; Analytics Manager, you will be responsible for partnering with Finance stakeholders to scope and prioritize reporting and analytics needs, and for designing and delivering dashboards, reports, and analyses.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collecting, analyzing, and understanding business reporting requirements and translating them to data models and dashboards</li>
<li>Designing and building underlying data models to support end-user reporting</li>
<li>Evaluating and prioritizing new data analytics and insights project submissions, ensuring alignment with Finance org priorities</li>
<li>Enhancing and evolving Finance reporting capabilities and driving the adoption of analytics tools and aligned reporting frameworks</li>
<li>Creating detailed metadata documentation for data models and columns to improve data clarity and usability for end-users</li>
<li>Monitoring and maintaining data quality, validation/audits, and governance for in-scope data domains</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of relevant work experience in Business Analytics, Data Modeling/Analytics, Data Management, or Program Management</li>
<li>Prior experience working across different Finance stakeholders, such as FP&amp;A, Revenue, Financial Reporting, etc.</li>
<li>Strong understanding of Finance reporting principles and best practices</li>
<li>Experience conducting large-scale data analysis to support business decision-making</li>
<li>Experience working with enterprise-wide data, structured and unstructured, joining large and complex data sets</li>
<li>Experience with SQL, dbt, Sigma Computing, and version control systems such as Git</li>
<li>Demonstrable ability to be agile, proactive, and comfortable working in ambiguity, and to think creatively to come up with innovative solutions to complex problems</li>
<li>Ability to translate functional (business) needs to analytics/technology</li>
<li>Familiarity with data structures from common Finance systems, such as NetSuite, Zuora Revpro</li>
<li>Self-motivated and pragmatic, with a get-it-done mentality</li>
</ul>
<p>As a distributed company, diversity drives our identity. Whether you&#39;re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn&#39;t matter if you&#39;re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<p>Benefits include competitive pay, health coverage for you and your family, flexible locations and schedules, generous vacation days, and more.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, dbt, Sigma Computing, version control systems, data modeling, data analysis, finance reporting, enterprise-wide data</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform, used by more than 50% of the Fortune 500, brings together the precision of search and the intelligence of AI.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7657471</Applyto>
      <Location>Costa Rica</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21efddae-814</externalid>
      <Title>Programs and Commercial Operations Manager (16-month contract)</Title>
      <Description><![CDATA[<p>Job Title: Programs and Commercial Operations Manager (16-month contract)</p>
<p>Location: Canada</p>
<p>Department: Business Operations/Analysis</p>
<p>Job Description:</p>
<p>Airbnb was born in 2007 when two hosts welcomed three guests to their San Francisco home, and has since grown to over 5 million hosts who have welcomed over 2 billion guest arrivals in almost every country across the globe.</p>
<p>Every day, hosts offer unique stays and experiences that make it possible for guests to connect with communities in a more authentic way.</p>
<p>The community that awaits you:</p>
<p>Each day, Airbnb hosts offer unique stays and experiences that allow travelers to weave connections with communities in a more authentic way.</p>
<p>Since Airbnb&#39;s inception, over 5 million hosts have welcomed over 1.5 billion visitors in nearly every country around the world.</p>
<p>The organization&#39;s commercial leadership plays a central role in the company&#39;s growth and expansion efforts worldwide.</p>
<p>Our colleagues work to commercialize new and existing Airbnb businesses, enabling Airbnb to become more than just a hosting platform.</p>
<p>This includes recruiting and developing Airbnb&#39;s global offerings of stays, experiences, and services, as well as implementing Airbnb&#39;s global strategy in the market.</p>
<p>By focusing on commercial strategy, quality procurement, and international expansion, the team opens the door to continuous growth and success for Airbnb in its next chapter.</p>
<p>The regional operations team, part of the commercial operations division in Canada, ensures that Airbnb&#39;s activities are relevant at the local level and meet the needs of hosts and travelers worldwide by helping regions plan and implement growth strategies in areas such as supply, demand, and other sectors in a way that connects with the community.</p>
<p>To support the development of our activities, we want to recruit the best and brightest talents to make strategic business decisions.</p>
<p>To this end, the team is looking for a commercial operations manager to join our commercial operations team in Canada.</p>
<p>Your Contribution:</p>
<p>You will work closely with leaders and cross-functional teams and play a key role in developing our growth-focused programs and optimizing the community.</p>
<p>The successful candidate will report to the National Director in Canada.</p>
<p>Typical Day:</p>
<p>Align all functions to achieve objectives set with the Canadian leadership.</p>
<p>Organize regular commercial operations meetings and initiate constructive conversations, as well as organize weekly project meetings with cross-functional teams (field and online operations, marketing, products, legal services) to ensure that key initiatives are on track in Canada.</p>
<p>Study macroeconomic and microeconomic trends and present analysis, reports, and action plans to the Canadian national and regional Airbnb management team to enable them to make strategic business decisions.</p>
<p>Support the National Director in Canada and other senior leaders on key strategic projects, as well as lead and implement ad-hoc operational and strategic projects for Canada.</p>
<p>May need to coordinate activities with national, regional, and local teams in San Francisco.</p>
<p>Be responsible for designing, executing, and evaluating end-to-end cross-functional initiatives.</p>
<p>In concrete terms, this involves defining a precise problem statement, articulating the ideal solution to that problem, and identifying the main dependencies that hinder the solution, then working with cross-functional stakeholders to implement it.</p>
<p>These initiatives will cover areas such as demand, supply, and regulatory aspects of the business.</p>
<p>Your Expertise:</p>
<p>The ideal candidate has at least ten years of professional experience, preferably with a combination of exposure to data modeling, analysis, and project management in a dynamic entrepreneurial environment.</p>
<p>Excellent communication skills, both written and verbal, combined with the ability to deliver concise, clear, and compelling presentations at all levels of the organization.</p>
<p>Proven experience in influencing and mobilizing various cross-functional stakeholders towards a common mission or objective using qualitative and quantitative information.</p>
<p>Ability to dive into details to drive execution, as well as take a step back to contextualize recommendations and initiatives within the company&#39;s overall global strategy.</p>
<p>Ability to perform multiple tasks by prioritizing work and coordinating necessary support between various functions to achieve project goals.</p>
<p>Impeccable organizational skills, great attention to detail, and the ability to document processes and decisions made.</p>
<p>Experience in implementing and developing initiatives within dispersed teams.</p>
<p>Experience working cross-functionally with multiple departments.</p>
<p>Practical approach to moving things forward quickly.</p>
<p>Ability to manipulate large datasets in Excel is essential. SQL mastery is highly preferred.</p>
<p>Your Location:</p>
<p>This position is located in Canada (telecommuting possible).</p>
<p>The position may involve working occasionally in an Airbnb office or participating in events outside the office (as agreed with your manager).</p>
<p>Telecommuting is possible, but you must live in a province where Airbnb Canada Inc. has a registered entity: British Columbia, Alberta, Ontario, or Quebec.</p>
<p>Please contact us if you live in a province or territory that is not part of this list.</p>
<p>If you already work for another Airbnb entity, your recruiter will inform you in which provinces and territories you can work.</p>
<p>Our commitment to inclusion and belonging:</p>
<p>Airbnb is committed to working with people from the most diverse backgrounds possible.</p>
<p>We believe that diversity of ideas encourages innovation and engagement.</p>
<p>This approach helps us attract creative talent and develop the best products and services, as well as the best possible solutions.</p>
<p>We encourage anyone qualified to apply.</p>
<p>We also strive to provide an inclusive hiring and interview process for people with disabilities.</p>
<p>If you have a disability and need reasonable accommodations to submit your application, please contact us at reasonableaccommodations@airbnb.com.</p>
<p>Please indicate your full name, the position you are applying for, and the accommodations you need to help you through the recruitment process.</p>
<p>We ask that you only contact us about accommodations, not about the status of your application.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data modeling, analysis, project management, communication, influencing, mobilizing, qualitative information, quantitative information, Excel, SQL</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most well-known travel companies in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7743374</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e7491b84-e4f</externalid>
      <Title>Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help build and operate a graph data service that supports GitLab Duo agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll join a small, Rust-first team that values clear ownership, thoughtful system design, and rigorous thinking about data and reliability. The Knowledge Graph service is a Rust backend that builds a property graph from GitLab&#39;s software development lifecycle (SDLC) and code data. It uses ClickHouse, NATS JetStream, and the Data Insights Platform. It exposes secure graph queries and MCP tools used by AI agents and product features.</p>
<p>In this role, you&#39;ll deliver features and improvements in well-scoped areas, learn the broader architecture, and contribute to reliability, observability, and operational readiness. In your first year, you&#39;ll take clear ownership of specific components or features (for example, parts of the SDLC indexing pipeline or query paths). You&#39;ll help reduce single points of failure with better tests and runbooks, and you&#39;ll help the team ship analytical services that are easier to maintain and evolve over time.</p>
<p>Key responsibilities include:</p>
<p>Implementing and iterating on backend features in the Rust-based Knowledge Graph service, including changes to the query engine, SDLC and code indexing flows, and API endpoints (including MCP endpoints) under guidance from senior and staff engineers.</p>
<p>Helping maintain integrations between Knowledge Graph and the rest of the GitLab platform, working in areas that touch GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform.</p>
<p>Contributing to system design discussions by proposing options, raising questions, and documenting decisions, with a focus on reliability, scalability, and maintainability for analytical graph workloads.</p>
<p>Improving the operational maturity of the service by adding or enhancing metrics, logging, runbooks, alerts, and small readiness tasks, and by participating in on-call rotation as appropriate for your level and experience.</p>
<p>Collaborating asynchronously with product, data, infrastructure, security, and AI counterparts to clarify requirements, align on scope, and ship features safely for customers and sustainably for the team.</p>
<p>Using AI-assisted development workflows responsibly (for example, using Knowledge Graph-backed agents and internal Duo tooling), and sharing what works with the team while keeping a strong focus on code quality and correctness.</p>
<p>Participating in code reviews, knowledge-sharing sessions, and pairing to both learn from others and help maintain consistent standards across the codebase.</p>
<p>Contributing across the stack when needed, including occasional Ruby work for Rails integration and authorization paths, or small frontend changes related to Knowledge Graph features (for example, Software Architecture Map UI plumbing).</p>
<p>What you&#39;ll bring:</p>
<p>Professional experience building and maintaining backend systems in production, with an understanding of reliability, maintainability, and how to support services over time (incident response, follow-ups, etc.).</p>
<p>Proficiency in at least one modern backend language and strong interest in Rust, with either prior Rust experience or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive codebase.</p>
<p>Some exposure to distributed data or analytics systems (for example, OLAP databases, Kafka- or NATS-style messaging, or change data capture (CDC) pipelines), or strong motivation to develop those skills in this role.</p>
<p>Interest in graph data modeling and query patterns (property graphs, multi-step (n-hop) traversals, aggregations), and willingness to learn the tools and concepts used in Knowledge Graph over time.</p>
<p>Practical experience (or strong interest) using AI tools in day-to-day development, along with a thoughtful approach to validating outputs and integrating AI into your workflow.</p>
<p>A language-agnostic mindset and evidence that you can pick up new languages and frameworks as needed (for example, Ruby, Go, or TypeScript/Vue where the work touches adjacent systems).</p>
<p>Solid fundamentals in system design for your level, including the ability to reason about trade-offs, ask good questions, and align your implementation work with documented architectural decisions.</p>
<p>Comfort working in a low-process, high-ownership environment where you take responsibility for your work, communicate progress clearly, and help refine problem statements with your teammates.</p>
<p>Strong written communication and comfort collaborating asynchronously across time zones in an all-remote team.</p>
<p>About the team:</p>
<p>We sit within the Data Engineering organization. We&#39;re a small group of senior engineers and we work closely with partners across AI (Duo Agent Platform), analytics, infrastructure and delivery, and security because our work spans many parts of the platform. We collaborate asynchronously and optimize for strong ownership rather than a feature factory model. We each build a meaningful understanding of the system and help evolve it over time. A key challenge for us right now is scaling sustainably. That includes hardening multi-tenant behavior, maturing observability and readiness, and keeping the system healthy and maintainable as usage grows and team members take time off. At the same time, we&#39;re bringing Knowledge Graph to general availability (GA).</p>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>intermediate</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, backend systems, distributed data, analytics systems, graph data modeling, query patterns, AI tools, system design, low-process, high-ownership environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides a suite of tools for version control, collaboration, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481958002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ccde3b0-9f8</externalid>
      <Title>Director Product Management, Banking + AP</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that empowers companies to spend smarter and move faster in more than 200 markets.</p>
<p>As Director of Product Management, Banking + AP, you will define and drive the strategy for one of Brex&#39;s most critical domains: the Operating Account platform, the combination of Banking and Accounts Payable that makes Brex the place customers run payments from day one.</p>
<p>Brex&#39;s banking product is used by tens of thousands of businesses to store $10B+ in total deposits, and our AP platform moves over $1B across hundreds of thousands of transactions each month.</p>
<p>This role will own the end-to-end product vision and execution across customer accounts, money movement, treasury capabilities, liquidity management, accounts payable workflows, and regulatory-compliant banking services.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and drive Operating Account strategy: Own the multi-year product vision and roadmap for Brex&#39;s Operating Account platform spanning Banking and Accounts Payable, aligned to company-level priorities.</li>
<li>Identify and prioritize high-leverage opportunities that drive AP attach, deepen operating behavior, and grow durable DDA balances through payment activity rather than balance-seeking alone.</li>
<li>Identify customer challenges across increasing coordination and control complexity, from admin-initiated payments and centralized AP through decentralized, multi-stakeholder procurement workflows.</li>
<li>Balance innovation with regulatory rigor, ensuring long-term scalability and resilience.</li>
</ul>
<p>Lead complex, regulated systems across Banking and AP:</p>
<ul>
<li>Oversee core banking and AP domains including accounts, ledger systems, payment rails (ACH, wires, RTP, international), bill pay, vendor management, treasury workflows, and embedded financial services.</li>
<li>Drive the integration of banking and AP capabilities into a unified operating experience, ensuring that money movement, vendor payments, and account management work seamlessly together.</li>
<li>Engage deeply with Engineering and Risk in discussions around APIs, ledger design, reconciliation, data models, and risk controls, to design reliable and scalable systems.</li>
</ul>
<p>Build and lead a high-performing PM team:</p>
<ul>
<li>Manage and develop a team of Product Managers spanning banking and AP, fostering ownership, clarity, and high standards.</li>
<li>Elevate product craft across the organization, being at the forefront of how the role is changing as we incorporate agents.</li>
<li>Establish strong operating rhythms for planning, prioritization, and cross-functional alignment.</li>
</ul>
<p>Deliver measurable business impact:</p>
<ul>
<li>Define success metrics tied to company outcomes such as operating account attach rate, DDA balance growth, AP transaction volume, revenue expansion, margin improvement, customer retention, risk loss rates, and operational efficiency.</li>
<li>Drive structured decision-making using data, experimentation, and financial modeling, with a sharp focus on the relationship between AP behavior and balance durability.</li>
<li>Hold teams accountable to clear impact targets and measurable results.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>10+ years of product management experience, including leading and developing high-performing PM teams and building strong product cultures.</li>
<li>Proven track record of setting and executing multi-quarter or multi-year strategies that delivered measurable business impact (growth, retention, revenue expansion, cost efficiency, or adoption).</li>
<li>Strong business and systems thinking, able to translate complex operational or infrastructure challenges into simple customer experiences that drive adoption and long-term platform stickiness.</li>
<li>Data-driven decision maker with experience defining success metrics, modeling business impact, and using SQL or similar tools to inform priorities and tradeoffs.</li>
<li>Exceptional communicator who creates clarity in complex environments, distilling strategy, tradeoffs, and decisions into narratives that align executives and mobilize teams.</li>
<li>Experience building trusted, mission-critical products where reliability, accuracy, and customer confidence are non-negotiable.</li>
<li>High standards for product rigor and execution quality, consistently raising the bar for clarity of thinking, prioritization, and customer impact.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience building or scaling accounts payable, payments, or money movement workflows, especially in B2B environments.</li>
<li>Experience with financial infrastructure, fintech platforms, or regulated products.</li>
<li>Familiarity with distributed systems, APIs, ledgers, payment rails, or financial data models.</li>
<li>Experience improving or automating procurement, bill pay, vendor management, or operational finance workflows.</li>
<li>Track record of driving deposit growth, balance-based revenue, or usage-driven platform adoption.</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $340,000 - $425,000. However, the starting base pay will depend on a number of factors including the candidate’s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$340,000 - $425,000</Salaryrange>
      <Skills>Product Management, Banking, Accounts Payable, Financial Infrastructure, Fintech Platforms, Regulated Products, Distributed Systems, APIs, Ledgers, Payment Rails, Financial Data Models, SQL, Data Modeling, Risk Controls, Vendor Management, Treasury Workflows, Embedded Financial Services</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8439929002</Applyto>
      <Location>Vancouver, British Columbia, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a95e0984-1cb</externalid>
      <Title>Senior Business Solutions Engineer, BizTech</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Senior Business Solutions Engineer, BizTech</p>
<p><strong>Job Description</strong></p>
<p>We are seeking a world-class Senior Business Solutions Engineer to join our dynamic team. As a Senior Business Solutions Engineer, you will be responsible for providing quick solutions utilizing internal tools and agentic AI to support and assist teams in enhancing their day-to-day workflows. You will oversee the end-to-end delivery of system and tool changes as required, and provide ongoing support for those solutions.</p>
<p>Your expertise will help identify current pain points for stakeholders, promote industry best practices, and maintain data integrity and accuracy within our platforms. You will also play a crucial role in mentoring junior team members, driving strategy, and delivering results with excellence.</p>
<p>A high degree of focus on practical AI and understanding the current state of industry as AI technology changes is critical.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Product Roadmap Ownership</strong></p>
<p>Lead the design, development, implementation, and optimization of custom solutions to improve internal teams&#39; processes and workflows, with a focus on Agentic AI solutions utilizing Airbnb-approved internal tools.</p>
<p><strong>Stakeholder Management</strong></p>
<p>Identify and address current pain points, socialize industry best practices, and prioritize project pipelines in collaboration with stakeholders.</p>
<p><strong>Automation</strong></p>
<p>Identify and drive automation initiatives to reduce manual processes and improve operational efficiency across core business functions.</p>
<p><strong>Vendor Management</strong></p>
<p>Partner with our vendors to drive and own the product feature and bug lifecycle from requirements gathering to launch.</p>
<p><strong>Industry Trends</strong></p>
<p>Stay updated with the latest platform features, best practices, and industry trends to proactively identify opportunities for enhancements.</p>
<p><strong>Business Process Improvement</strong></p>
<p>Craft and suggest future-state business processes, and lead the implementation of these suggestions.</p>
<p><strong>Expertise and Mentorship</strong></p>
<p>Act as a company-wide expert in relevant functional tools and applications. Mentor junior team members and provide guidance on development and delivery.</p>
<p><strong>Architecture and Design Reviews</strong></p>
<p>Collaborate on architecture and design reviews, offering necessary guidance to ensure development guidelines are followed.</p>
<p><strong>Requirements</strong></p>
<p><strong>Experience</strong></p>
<ul>
<li>8+ years of professional experience with proven enterprise-grade systems implementations, IT consulting, or a similar field leveraging advanced platform integrations</li>
<li>Minimum of 2 years of experience demonstrably using AI in workflows and to support customers</li>
<li>Experience researching and implementing custom applications from the Marketplace, from research to design and implementation</li>
<li>Advanced degree (e.g., MS/MBA) in Business, Computer Science, or related fields preferred</li>
<li>Demonstrated experience designing and implementing custom business solutions and integrations using enterprise platforms</li>
<li>Deep knowledge of data modeling, business intelligence tools (e.g., Tableau), and enterprise reporting best practices</li>
<li>Skilled at stakeholder engagement and cross-functional collaboration in large global organizations</li>
<li>Adept at leading complex projects, simplifying technical concepts, and promoting operational excellence</li>
<li>Excellent written and verbal communication skills, especially in critiquing technical designs, leading workshops, and engaging with senior leadership</li>
<li>Exemplifies strong leadership fundamentals, guides junior team members, and drives strategy with a focus on long-term scalability</li>
<li>Proficient in designing and implementing data management strategies to ensure data integrity</li>
<li>Identifies opportunities for product and process improvements and can lead and implement said improvements, obtaining necessary buy-in from stakeholders and leadership</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p>Advanced degree (e.g., MS/MBA) in Business, Computer Science, or related fields</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise-grade systems implementations, IT consulting, Agentic AI, Data modeling, Business intelligence tools, Enterprise reporting best practices, Stakeholder engagement, Cross-functional collaboration, Leadership, Data management strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global platform that connects hosts with guests, offering unique stays and experiences in almost every country across the globe. With over 5 million hosts and 2 billion guest arrivals, Airbnb has grown significantly since its inception in 2007.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7716311</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7dee17b8-923</externalid>
      <Title>Data Science Engineer, Capacity &amp; Efficiency</Title>
      <Description><![CDATA[<p>As a Data Science Engineer, Capacity &amp; Efficiency, you will play a critical role in Anthropic&#39;s mission of building safe and beneficial AI by ensuring we understand, optimize, and strategically manage our cloud infrastructure spend.</p>
<p>You will work closely with Compute Finance, Infrastructure Engineers, and Product to translate raw cloud billing data into actionable efficiency insights and influence capacity planning &amp; allocation. You will help build deep visibility into our infrastructure spend, forecast capacity needs, attribute costs accurately across teams and workloads, model resource demand curves, and help identify efficiency opportunities across our fleet.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build and maintain cloud cost attribution models that accurately allocate infrastructure spend (compute, accelerators, storage, networking, data transfer) across teams, products, and workloads, providing clear visibility into who is spending what and why.</li>
<li>Partner with infrastructure, finance, and procurement stakeholders to analyze utilization patterns, identify inefficiencies, and drive optimization initiatives that improve the cost-effectiveness of our non-accelerator cloud resources.</li>
<li>Develop forecasting models for non-accelerator infrastructure demand, incorporating business growth projections, product roadmaps, and historical spend trends to enable proactive capacity planning and budget accuracy.</li>
<li>Define and track unit cost metrics (e.g., cost per request, cost per GB stored, cost per pipeline run) and identify opportunities to reduce them, influencing infrastructure and engineering roadmaps with data-driven recommendations.</li>
<li>Develop unit cost economics for various workloads and applications, and use these metrics to drive efficiency efforts across product and infrastructure teams.</li>
<li>Build a cost-aware culture across the organization by creating self-serve dashboards, automated reporting, and accessible datasets that give engineering and finance teams clear visibility into cloud spend and efficiency metrics.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>6+ years of experience in data science, analytics, or FinOps roles, with a focus on cloud infrastructure cost analysis, capacity planning, or efficiency optimization.</li>
<li>Experience building spend forecasting models and large-scale cost attribution systems.</li>
<li>Deep knowledge of cloud billing systems, cost allocation methodologies, and spend optimization levers (e.g., reserved instances, committed use discounts, rightsizing, spot/preemptible usage).</li>
<li>Expertise in Python, SQL, forecasting, data modeling and data visualization tools.</li>
<li>Strong communication and presentation skills.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive compensation and benefits package.</li>
<li>Optional equity donation matching.</li>
<li>Generous vacation and parental leave.</li>
<li>Flexible working hours.</li>
<li>Lovely office space in San Francisco.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$275,000-$370,000 USD</Salaryrange>
      <Skills>cloud infrastructure cost analysis, capacity planning, efficiency optimization, Python, SQL, forecasting, data modeling, data visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5125881008</Applyto>
      <Location>New York City, NY; San Francisco, CA | New York City, NY; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f60560e-b87</externalid>
      <Title>Applied Math Tutor</Title>
      <Description><![CDATA[<p>As an AI Tutor - Applied Math Specialist, you&#39;ll play a key role in advancing xAI&#39;s mission by enhancing our AI technologies through high-quality inputs, labels, and annotations using specialized software.</p>
<p>You&#39;ll collaborate with our technical team to train models on human interactions, problem-solving, and discussions; refine annotation tools; and select/create challenging applied math problems to boost performance.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software applications to provide input/labels on defined projects.</li>
<li>Support and ensure the delivery of high-quality curated data.</li>
<li>Play a pivotal role in supporting and contributing to the training of new tasks, working closely with the technical staff to ensure the successful development and implementation of cutting-edge initiatives/technologies.</li>
<li>Interact with the technical staff to help improve the design of efficient annotation tools.</li>
<li>Choose problems from applied math domains that align with your expertise, focusing on areas such as probability, statistics, numerical analysis, optimization, operations research, dynamical systems, data modeling, and related fields where you can confidently provide detailed solutions and evaluate model responses.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Must have either (a) a Master’s or PhD in Mathematics with a specialization in a subdomain of applied mathematics (there is a separate position for pure math) or (b) a medal in an International Math Olympiad (IMO) or similar level math competition.</li>
<li>Proficiency in reading and writing, both in informal and professional English.</li>
<li>Strong ability to navigate various information resources and databases.</li>
<li>Outstanding communication, interpersonal, analytical, and organizational capabilities.</li>
<li>Solid reading comprehension skills combined with capacity to exercise autonomous judgment even when presented with limited data/material.</li>
<li>A strong passion for and commitment to technological advancements and innovation.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Math PhD with a specialization in a subdomain of applied mathematics.</li>
<li>Peer-reviewed applied math publications in well-regarded journals.</li>
<li>Previous AI Tutoring experience and/or experience teaching college courses.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may involve at least 10 hours per week to achieve deliverables effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role specific needs.</li>
<li>For US based candidates, please note we are unable to hire in the states of Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $45/hour - $75/hour, depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, a 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$45/hour - $75/hour</Salaryrange>
      <Skills>applied mathematics, probability, statistics, numerical analysis, optimization, operations research, dynamical systems, data modeling, math PhD, peer-reviewed publications, AI tutoring experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4925878007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6acd8036-5ec</externalid>
      <Title>Platform Engineer (Databases &amp; Storage)</Title>
      <Description><![CDATA[<p>We are looking for a Staff Platform Engineer to own the database and storage foundation of World Labs. This is a high-impact systems role at the intersection of databases, distributed systems, and AI infrastructure. You will define how core data systems are designed, scaled, and operated in an environment where workloads are evolving quickly and requirements are often ambiguous.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning the design and evolution of the transactional systems that power the platform.</li>
<li>Defining architecture for database and storage systems under high-throughput, low-latency workloads.</li>
<li>Making and driving decisions around data modeling, indexing, replication, and consistency.</li>
<li>Debugging and resolving complex production issues.</li>
<li>Establishing standards for reliability, observability, and operability across the platform.</li>
<li>Partnering with product and research teams to support evolving and often ambiguous requirements.</li>
<li>Driving improvements in performance, scalability, and cost across the system.</li>
<li>Mentoring engineers and raising the bar for system design and technical decision-making.</li>
</ul>
<p>Key qualifications include:</p>
<ul>
<li>10+ years of experience building and operating production systems at scale, with ownership of critical infrastructure.</li>
<li>Strong experience designing and operating transactional systems and databases.</li>
<li>Deep understanding of data modeling, indexing, transactions, concurrency, and consistency tradeoffs.</li>
<li>Experience owning systems with strict reliability and performance requirements in production.</li>
<li>Strong experience debugging complex production issues and reasoning about failure modes.</li>
<li>Experience designing distributed systems or large-scale infrastructure where tradeoffs are non-trivial.</li>
<li>Proven ability to define architecture and drive technical decisions end-to-end.</li>
<li>Strong judgment in balancing performance, reliability, and cost.</li>
<li>Ability to operate effectively in ambiguous, fast-moving environments with high ownership.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with database internals, storage systems, or query engines.</li>
<li>Experience building infrastructure for AI/ML systems or data platforms.</li>
<li>Experience in early-stage or high-growth environments.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$200-$300k base salary (good-faith estimate for San Francisco Bay Area upon hire; actual offer based on experience, skills, and qualifications)</Salaryrange>
      <Skills>database internals, storage systems, query engines, data modeling, indexing, transactions, concurrency, consistency, distributed systems, large-scale infrastructure, AI/ML systems, data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>World Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/worldlabs.ai.png</Employerlogo>
      <Employerdescription>World Labs builds foundational world models that can perceive, generate, reason, and interact with the 3D world.</Employerdescription>
      <Employerwebsite>https://www.worldlabs.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/worldlabs/jobs/4194381009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is a technology company that powers product experiences.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2dd69016-e4d</externalid>
      <Title>Senior Engineer, Foundry Applications</Title>
      <Description><![CDATA[<p>We&#39;re growing quickly and looking for a Full-Stack Software Engineer to help design, develop, and deploy robust applications that directly enable manufacturing, operational planning, and execution , all within a Foundry-integrated ecosystem.</p>
<p>As a Senior Engineer, Foundry Applications, you will execute the development and release of Foundry-backed applications that power MES, ERP, and operational analytics. You will develop end-to-end features: From backend services and data transformations to user-facing interfaces , ensuring performance, maintainability, and security.</p>
<p>Collaborate with production managers, engineers, and supply chain teams to understand workflows and deliver scalable software solutions. Leverage Palantir Foundry: Use Foundry&#39;s APIs, ontology, and visualization tools to integrate data across multiple domains. Deliver continuously: Write well-tested, maintainable code and iterate quickly based on real-world feedback.</p>
<p>Shape the roadmap: Contribute to technical direction, make sound architecture decisions, and help scale our Foundry development capabilities.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$125,000 - $175,000 a year</Salaryrange>
      <Skills>full-stack application development, Python data-processing pipelines, data modeling, software design, testing, Palantir Foundry, Typescript, modern front-end frameworks, software for manufacturing, modern software delivery practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/c2479afc-d804-4b2a-a3b8-0b111f785e8e</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5c694170-8a7</externalid>
      <Title>Senior Engineer, Full-Stack Software - Foundry Applications</Title>
      <Description><![CDATA[<p>The Applications team within Software Integration &amp; Operations at Shield AI builds internal tools powering critical operations across Production, Customer Success, Supply Chain, and Engineering. We&#39;re looking for a Full-Stack Software Engineer to design, develop, and deploy robust applications enabling manufacturing, operational planning, and execution within a Foundry-integrated ecosystem.</p>
<p>Key responsibilities include executing the development and release of Foundry-backed applications, developing end-to-end features, collaborating with production managers and supply chain teams, leveraging Palantir Foundry, and delivering continuously.</p>
<p>Required qualifications include 5+ years of experience in full-stack application development, hands-on experience designing and building React applications and TypeScript/JavaScript-based backends, strong skills in data modeling, and a solid foundation in software design, testing, and documentation.</p>
<p>Preferred qualifications include familiarity with Palantir Foundry, hands-on experience designing and operating Python data-processing pipelines, and experience building software for manufacturing, supply chain, or aircraft maintenance.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$120,000 - $190,000 a year</Salaryrange>
      <Skills>full-stack application development, React application development, TypeScript/Javascript-based backend development, data modeling, software design, testing, documentation, Palantir Foundry, Python data-processing pipelines, manufacturing software development, supply chain software development, aircraft maintenance software development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/12901f0e-56f1-4654-939d-1605d5b10e75</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>eb6eaa25-590</externalid>
      <Title>Salesforce Developer</Title>
      <Description><![CDATA[<p>Job Overview</p>
<p>We are seeking a skilled Salesforce Developer to design, develop, and maintain scalable solutions within the Salesforce platform. This role will support business processes across sales, service, and operations while ensuring alignment with enterprise architecture and integration strategies.</p>
<p>Responsibilities</p>
<ul>
<li>Design, develop, and implement Salesforce solutions using Apex, Lightning, and related technologies</li>
<li>Customize Salesforce objects, workflows, and automation</li>
<li>Develop and maintain integrations with external systems such as NetSuite and MuleSoft</li>
<li>Collaborate with business analysts and architects to translate requirements</li>
<li>Ensure code quality through best practices and testing</li>
<li>Support deployment, release management, and CI/CD processes</li>
<li>Troubleshoot and resolve system issues</li>
<li>Maintain documentation for solutions and enhancements</li>
<li>Ensure platform performance, scalability, and security</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5 to 8+ years of Salesforce development experience</li>
<li>Strong experience with Apex, Lightning Web Components, and Visualforce</li>
<li>Experience with Salesforce Sales Cloud and Service Cloud</li>
<li>Experience with integrations using APIs and middleware</li>
<li>Familiarity with DevOps tools and deployment pipelines</li>
<li>Strong understanding of data modeling and platform limits</li>
<li>Salesforce certifications preferred</li>
<li>Bachelor’s degree in Computer Science or related field</li>
</ul>
<p>Physical Demands</p>
<ul>
<li>Prolonged periods of sitting at a desk and working on a computer.</li>
<li>Occasional standing and walking within the office.</li>
<li>Manual dexterity to operate a computer keyboard, mouse, and other office equipment.</li>
<li>Visual acuity to read screens, documents, and reports.</li>
<li>Occasional reaching, bending, or stooping to access file drawers, cabinets, or office supplies.</li>
<li>Lifting and carrying items up to 20 pounds occasionally (e.g., office supplies, packages).</li>
</ul>
<p>Additional Information</p>
<ul>
<li>Benefits: Medical Insurance, Dental and Vision Insurance, Time Off, Parental Leave, Competitive Salary, Retirement Plan, Stock Options, Life and Disability Insurance, Pet Insurance</li>
<li>Saronic CCPA Notice for Candidates and California Employees: This role requires access to export-controlled information or items that require “U.S. Person” status.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce, Apex, Lightning, NetSuite, MuleSoft, DevOps, Data Modeling, Platform Limits</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/a0df621a-b4e3-4f2c-8ad1-da14f8790b4d</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2289a325-de7</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our team. As a Software Engineer at Saronic Technologies, you will maintain and build the Foundry platform alongside Palantir specialists. You will collaborate with internal stakeholders, collect requirements, and write code that results in secure, timely, accurate, and reliable data models in Palantir Foundry.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Maintain and build the Foundry platform alongside Palantir specialists.</li>
<li>Collaborate with internal stakeholders.</li>
<li>Collect requirements and write code that results in secure, timely, accurate, and reliable data models in Palantir Foundry.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree.</li>
<li>1+ years of experience in Information Technology.</li>
<li>1-2 years of experience with hands-on client-facing development and data integration.</li>
<li>Experience with Python and SQL.</li>
<li>Experience with data modeling, ETL, and data visualization.</li>
<li>Experience with Git-based code repositories and CI/CD workflows.</li>
<li>Ability to lead in fast-paced, Agile environments.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services.</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care.</li>
<li>Time Off: Generous PTO and Holidays.</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents.</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses.</li>
<li>Retirement Plan: 401(k) plan.</li>
<li>Stock Options: Equity options to give employees a stake in the company&#39;s success.</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage.</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, data modeling, ETL, data visualization, Git, CI/CD workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/febf9919-3e0a-46ab-97aa-c53256e18bf7</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0549f86b-cdd</externalid>
      <Title>Principal Software Engineer, Backend Systems</Title>
      <Description><![CDATA[<p>Role</p>
<p>We&#39;re on a mission to redefine enterprise software, and we&#39;re looking for a Principal Software Engineer, Backend Systems to help push the boundaries of what&#39;s possible.</p>
<p>If you love designing and scaling complex backend systems, have shipped major projects or entire products, and can think fluently about distributed systems, data modeling, API design, and integrations, this role is for you.</p>
<p>You&#39;ll architect and scale our Go/Postgres/Redis/GraphQL backend, working alongside world-class engineers and product minds to drive high-impact projects, lead critical design discussions, and collaborate directly with customers to shape a platform that&#39;s transforming supply chain, manufacturing, and beyond.</p>
<p>The industry is rooting for us, and you&#39;ll play a pivotal role in making it happen.</p>
<p>This is a high-autonomy, high-impact individual contributor role, but if you&#39;re interested in growing into people management, the opportunity is there depending on your interest and performance.</p>
<p>Product</p>
<p>We&#39;re building the AI-first ERP to replace decades-old giants like SAP and Oracle, transforming how enterprises operate.</p>
<p>Our platform can generate any enterprise workflow application in minutes, a dramatic leap from the 1-2 years it traditionally takes IT teams to build them, giving process owners in supply chain, manufacturing, and operations the power to standardize, streamline, and drive their work to completion, no matter the complexity.</p>
<p>This breakthrough is powered by our workflow, forms, and AI engines, as well as our in-house Large Tabular Model, a first-of-its-kind innovation.</p>
<p>Customers aren&#39;t just adopting our platform; they&#39;re clamoring for more, rapidly expanding their use cases as we enter an exhilarating growth phase.</p>
<p>As one user put it: “I’ve been waiting for this for 20 years.”</p>
<p>Culture and Compensation</p>
<p>We are a customer-obsessed, product-driven company that is building a flexible, hybrid/remote culture to enable the brightest minds in the industry.</p>
<p>We are particularly interested in candidates based in our hubs of Seattle, San Francisco and New York, but we will consider candidates who live anywhere in the US, Canada, or Mexico.</p>
<p>We have industry-leading compensation packages, including equity and health benefits.</p>
<p>We are willing to sponsor US work authorization if needed.</p>
<p>Requirements</p>
<ul>
<li><p>M.S. in Computer Science or a related field (B.S. in Computer Science or a related field will be considered with substantial relevant experience)</p>
</li>
<li><p>5+ years of industry experience as a backend software engineer, with a focus on large-scale, user-facing web applications in companies like Slack, Uber, or similar</p>
</li>
<li><p>Proven experience in the architecture and design of large data systems, particularly for software as a service (SaaS)</p>
</li>
<li><p>Extensive experience in database systems development, data modeling, distributed systems and building robust application backends</p>
</li>
<li><p>Fluency with databases, APIs and modern backend technologies (experience with Go and GraphQL is strongly preferred, with the ability to quickly learn new technologies as needed)</p>
</li>
<li><p>A builder&#39;s spirit (you have a track record of building projects for fun, staying updated with open-source developments, etc.)</p>
</li>
<li><p>Ability to lead projects independently and collaboratively in a fast-paced startup environment</p>
</li>
<li><p>Excellent written and verbal communication skills</p>
</li>
<li><p>Strong enthusiasm for continuous learning and professional growth and for mentoring peers to help them grow as engineers</p>
</li>
</ul>
<p>Responsibilities</p>
<ul>
<li><p>Architect, design and develop scalable and robust backend systems for large data software-as-a-service (SaaS) applications, ensuring high performance and reliability.</p>
</li>
<li><p>Collaborate with cross-functional teams including Design, Product Management and industry experts to build high-quality product features.</p>
</li>
<li><p>Lead and mentor a team of engineers, providing guidance and expertise in backend development, database systems and distributed systems.</p>
</li>
<li><p>Stay abreast of emerging technologies and industry trends, incorporating new developments into the backend architecture and processes where appropriate.</p>
</li>
<li><p>Participate in code reviews, technical discussions and decision-making processes to maintain high standards of code quality and best practices.</p>
</li>
<li><p>Drive the adoption of best practices in backend development, data modeling and API design, ensuring the scalability and maintainability of the system.</p>
</li>
<li><p>Champion a culture of innovation, encouraging and leading initiatives to explore new technologies and improve existing systems.</p>
</li>
</ul>
<p>Additional Information</p>
<p>If this sounds exciting, please apply and we&#39;ll get back to you promptly if we see a fit!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>USD 175,000-200,000 per year</Salaryrange>
      <Skills>M.S. in Computer Science or a related field, 5+ years of industry experience as a backend software engineer, Proven experience in the architecture and design of large data systems, Extensive experience in database systems development, data modeling, distributed systems and building robust application backends, Fluency with databases, APIs and modern backend technologies, Go, GraphQL, Postgres, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Regrello</Employername>
      <Employerlogo>https://logos.yubhub.co/regrello.com.png</Employerlogo>
      <Employerdescription>Regrello is a 60-person startup that is reimagining automation in supply chains and manufacturing.</Employerdescription>
      <Employerwebsite>https://regrello.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/regrello/3115193b-6b7b-4e03-bfcd-8bfab06e6e55</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2324ce80-532</externalid>
      <Title>Data Scientist - Network Value</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>The Network Value Data Science team is helping Plaid build an industry-leading fintech consumer network by increasing access to, authorization for, and usability of Plaid users&#39; financial footprints. We embed within product teams to support OKRs and help execute on product roadmaps. We translate ambiguous product questions into tractable analysis, serve as analytical thought partners throughout the org, identify opportunities to build better products, and champion a data-first decision-making approach everywhere we go.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Perform ad-hoc and strategic analyses to uncover opportunities for improved business outcomes and translate complex questions into actionable analytics projects.</li>
<li>Design and maintain scalable data models and dashboards that increase visibility into core systems and drive operational excellence.</li>
<li>Build and iterate on machine learning prototypes to power insight-driven products and unlock new sources of customer and business value.</li>
<li>Define and track OKRs that quantify progress toward key business goals, ensuring alignment and accountability across teams.</li>
<li>Design and analyze experiments to guide product decisions and optimize feature launches.</li>
<li>Champion a data-first culture by promoting analytical rigor and evidence-based decision-making across the organization.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>2+ years of experience as a Data Scientist or in a related analytics or data-focused role</li>
<li>Strong track record of turning complex data into strategic insights and measurable business impact</li>
<li>Proven ability to use experimentation, advanced analytics, and data storytelling to uncover opportunities that drive key product and business outcomes</li>
<li>Strong technical foundation in SQL and Python for large-scale analysis, data modeling, and ML prototyping</li>
<li>Experience developing and maintaining data pipelines and metrics frameworks using tools such as Airflow and dbt</li>
<li>Background working with complex backend systems, ensuring data integrity, scalability, and operational reliability across platforms</li>
<li>Skilled at partnering cross-functionally with product, engineering, and business teams to influence prioritization and strategy through clear, data-driven communication</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$176,400-$243,600 per year</Salaryrange>
      <Skills>SQL, Python, Machine Learning, Data Modeling, Data Pipelines, Metrics Frameworks, Airflow, dbt</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a fintech company that builds tools and experiences for developers to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/18503c02-17a0-4c47-98c8-155b0b6ccc2a</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>571471d0-f38</externalid>
      <Title>Product Lead - Growth</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>As Product Lead - Growth, you will oversee one of Plaid&#39;s highest-breadth portfolios. You&#39;ll lead PMs across Growth, Web, and Customer Foundations, setting strategy for funnel optimization, down-market acquisition, enterprise enablement, and the customer lifecycle model. You will partner deeply with Engineering, Design, BI/DS, Marketing, Sales, and Partnerships to shape growth motions across all GTM channels.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead PMs across Growth, Web, and Customer Foundations.</li>
<li>Define the end-to-end growth strategy across website, PLG, SMB, enterprise, and partnerships.</li>
<li>Own core funnel performance: traffic → MQL → onboarding → activation.</li>
<li>Drive scalable SMB and long-tail customer acquisition motions.</li>
<li>Partner with Sales and Partnerships to support large enterprise deals, identify product gaps, and shape deal mechanics.</li>
<li>Oversee onboarding risk engine and customer compliance workflows.</li>
<li>Build and operationalize Plaid’s Customer Lifecycle Model (CLM) to ensure clean customer state and entitlement logic.</li>
<li>Use data to identify funnel opportunities, segment users, validate hypotheses, and drive experimentation.</li>
<li>Make resource allocation and prioritization decisions across a high-breadth portfolio.</li>
<li>Communicate strategy, tradeoffs, and insights to Executive, GTM, and Engineering leadership</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>8-10+ years of product management experience, including 2+ years managing PMs.</li>
<li>Demonstrated success leading PLG or B2B growth at scale.</li>
<li>Strong analytical and funnel optimization skills; comfort with data modeling and experimentation.</li>
<li>Deep experience partnering with GTM teams (Sales, Partnerships, Marketing).</li>
<li>Experience supporting enterprise deals or partnership motions.</li>
<li>Demonstrated ability to operate across multiple domains: growth, risk, compliance, onboarding, internal systems.</li>
<li>Excellent communication and cross-functional leadership skills</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-329,400 per year</Salaryrange>
      <Skills>product management, growth strategy, funnel optimization, data modeling, experimentation, cross-functional leadership, fintech, onboarding, risk, compliance</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and experiences for developers to create their own products. It has a network covering 12,000 financial institutions across the US, Canada, UK and Europe.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/b87c595b-b231-4a86-bffa-450e3f9dc335</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7b750523-8ff</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to lead the technical strategy and implementation of our enterprise data architecture, governance foundations, and analytics enablement tooling.</p>
<p>In this role, you will be the primary engineering counterpart to the Senior Product Manager for Data Enablement &amp; Governance, jointly shaping the roadmap for enterprise analytics, shared definitions, and the tools that help Omada answer questions faster and more reliably.</p>
<p>You will design and evolve core data products, define patterns and standards used across the company, and drive the technical execution of initiatives that ensure our metrics, reports, and data products are scalable, governed, and trustworthy.</p>
<p>This is a high-impact, cross-functional Staff role working across Data Engineering, Data Science, Analytics, Product, IT, and business leaders.</p>
<p><strong>Key Responsibilities:</strong></p>
<p><strong>Enterprise Data Architecture</strong></p>
<ul>
<li>Own the vision and technical roadmap for Omada&#39;s enterprise data architecture, spanning ingestion, storage, modeling, and serving layers for analytics and applied statistics use cases.</li>
<li>Design, implement, and evolve scalable, secure, and cost-efficient data solutions (datalakes, warehouses, marts, semantic layers) that support governed, cross-functional analytics and self-service.</li>
<li>Define and socialize architectural patterns, data contracts, and integration standards used by data and product teams across the organization.</li>
<li>Anticipate future needs (e.g., new product lines, new modalities, AI/ML workloads) and drive proactive architectural changes rather than reacting to incidents or point-in-time requests.</li>
</ul>
<p><strong>Data Modeling, Quality, and Governance Foundations</strong></p>
<ul>
<li>Lead the design of logical and physical data models to support enterprise metrics, dashboards, and ad hoc analytics, with a focus on reusability and clear ownership.</li>
<li>Implement robust data quality, validation, and monitoring frameworks that underpin trusted “single source of truth” definitions for core concepts (e.g., active member, MAU, GLP-1 member).</li>
<li>Partner with the Senior Product Manager, Data Enablement &amp; Governance to translate governance decisions (definitions, ownership, change-management processes) into concrete technical implementations in the data platform.</li>
<li>Set standards and review mechanisms to ensure new pipelines, marts, and reports align with enterprise definitions and governance policies.</li>
<li>Continuously improve performance, scalability, and cost-efficiency of data workflows and storage; lead deep dives and remediation for complex production issues.</li>
</ul>
<p><strong>Enterprise Data Products Lifecycle</strong></p>
<ul>
<li>In close partnership with the Senior PM, define and deliver core, reusable data products (e.g., engagement, clinical, financial, client, care delivery datasets) that power dashboards, reporting, and self-service analytics.</li>
<li>Co-architect and implement technical foundations for AI-assisted analytics tools, governed semantic layers, and reporting applications that make analysts and business users more efficient.</li>
<li>Partner with Product and Engineering teams owning tools like Amplitude, Tableau, and internal reporting tools to ensure consistent instrumentation, mapping to enterprise definitions, and scalable access patterns.</li>
<li>Translate business and product requirements into resilient schemas, data services, and interfaces that are usable, maintainable, and auditable.</li>
<li>Ensure production data delivery meets defined SLAs and supports downstream BI, reporting apps, and applied statistics workloads.</li>
<li>Play a key role in cross-functional forums (e.g., Data Governance Committee, analytics communities) as the technical voice for feasibility, risk, and long-term platform health.</li>
</ul>
<p><strong>Technical Leadership, Mentorship, and Culture</strong></p>
<ul>
<li>Lead large, multi-team technical initiatives, from design to implementation and rollout, setting a high bar for design docs, reviews, and execution quality.</li>
<li>Mentor senior and mid-level engineers, elevating the team’s skills in data modeling, pipeline design, governance, and platform thinking.</li>
<li>Help shape playbooks for how product squads and spokes engage with central data teams on new metrics, data products, and applied stats projects.</li>
<li>Partner closely with Analytics, Data Science, Product, and business leaders to ensure data architecture and governance decisions are aligned with company OKRs and measurable business value.</li>
<li>Proactively identify complexity, duplication, and fragility in existing systems; drive simplification and standardization with sustainable solutions.</li>
<li>Model Omada’s values in day-to-day work, fostering a culture of trust, context-seeking, bold thinking, and high-impact delivery.</li>
</ul>
<p><strong>About You:</strong></p>
<ul>
<li>8+ years of experience building, maintaining, and orchestrating scalable data platforms and high-quality production pipelines, including significant experience in analytics or warehousing environments.</li>
<li>Demonstrated Staff-level impact: leading cross-team technical initiatives, making architectural decisions that shaped a multi-year roadmap, and influencing stakeholders beyond your immediate team.</li>
<li>Deep experience with cloud data ecosystems (e.g., AWS) and modern data warehouses (e.g., Redshift, Snowflake, BigQuery), including MPP query optimization.</li>
<li>Strong background in data modeling for OLTP and OLAP, and designing reusable data products for BI, reporting, and advanced analytics.</li>
<li>Hands-on experience implementing data quality, observability, and governance frameworks, ideally in a regulated or PHI/PII-sensitive environment.</li>
<li>Experience partnering with Product Management and Analytics to define and deliver platform capabilities, not just point solutions.</li>
</ul>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Strong proficiency in SQL (analytical and performance-tuned) and experience with relational and MPP databases.</li>
<li>Proficiency in at least one modern programming language used in data engineering (e.g., Python, Java, Scala) and comfort applying software engineering best practices (testing, CI/CD, code review).</li>
<li>Experience with workflow orchestration and data integration tools (e.g., Airflow) and event-driven or streaming patterns where appropriate.</li>
<li>Familiarity with BI and analytics tools (e.g., Tableau, Amplitude, or similar) and how they integrate with governed data layers.</li>
<li>Experience with data governance concepts (ownership, lineage, definitions, access controls) and their technical implementation in a modern data stack.</li>
<li>Familiarity with AI tools for development.</li>
</ul>
<p><strong>Communication &amp; Working Style:</strong></p>
<ul>
<li>Excellent communication and collaboration skills, with the ability to convey complex technical concepts to non-technical stakeholders.</li>
<li>Highly self-directed and comfortable operating in ambiguous, cross-functional problem spaces, creating clarity and direction where none exists.</li>
<li>Strong sense of ownership and bias for impact; you care about outcomes for members, customers, and internal users, not just elegant systems.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you recharge</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Cloud data ecosystems, Modern data warehouses, MPP query optimization, Data modeling, Data quality, Data governance, Workflow orchestration, Data integration, Event-driven or streaming patterns, BI and analytics tools, AI tools for development</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a healthcare technology company that provides digital therapeutics for chronic disease management.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7753330</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dcb44b1b-ec9</externalid>
      <Title>Senior Data Analyst</Title>
      <Description><![CDATA[<p>At Neighbor, our vision is to bring communities together by solving our neighbors&#39; biggest challenges.</p>
<p>We&#39;re building the largest hyperlocal marketplace the world has seen. Our marketplace is already flourishing in all 50 states and we&#39;re just getting started!</p>
<p>As a Senior Data Analyst, you are an independent driver and architect, entrusted with solving Neighbor&#39;s most ambitious, complex, and ambiguous problems. You will leverage advanced data analytics to not only execute the data strategy, but also anticipate the challenges that lie ahead. You will bridge the gap between executive vision and technical implementation, pioneering robust data models and pipelines while steering executive-level data strategy and fundamentally shaping the company&#39;s data-driven business decisions with your insights.</p>
<p><strong>Primary Responsibilities</strong></p>
<ul>
<li>Help build a world-class Data &amp; Analytics team focused on data integrity, business intelligence, data-driven decisions, testing, accountability and research</li>
<li>Lead the design and implementation of our data modeling layers (Data Lakes/Warehouses), ensuring a &#39;single source of truth&#39; for a complex, two-sided marketplace</li>
<li>Beyond SQL, you will build and maintain robust ETL/ELT pipelines and deploy predictive models to forecast and automate insights</li>
<li>Mentor junior team members and empower non-technical stakeholders to make autonomous, data-informed decisions through world-class BI tooling</li>
<li>Move beyond &#39;what happened&#39; to &#39;what will happen.&#39; Develop statistical frameworks to test hypotheses and measure the impact of new product features</li>
<li>Act as the primary data partner for Product, Marketing, Sales, Engineering, Finance, and Customer Success leadership</li>
<li>Become an expert on all aspects of Neighbor&#39;s product, user, marketing and other data</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in quantitative and/or technical fields (Math, Physics, Statistics, Economics, Computer Science, Engineering, etc.) OR 7 years of working experience in data analytics</li>
<li>5+ years of prior experience as a Data Analyst</li>
<li>3+ years of experience solving complex, ambiguous problems independently and presenting results to stakeholders</li>
<li>Experience with ETL such as data manipulation, organization and cleaning</li>
<li>Experience modeling data lakes or data warehouses</li>
<li>Experience with predictive modeling</li>
<li>Advanced proficiency in reporting and visualization software</li>
<li>Experience using mathematical, scientific and statistical techniques to analyze data</li>
<li>Proven track record of using quantitative analysis to solve problems, and drive key business decisions</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>About Neighbor: Neighbor is the largest and most comprehensive marketplace for self storage and parking, with listings in almost every U.S. city. From storage facilities to neighborhood garages, driveways, and RV spots, Neighbor brings every option together in one simple search.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL, data modeling, predictive modeling, reporting and visualization software, mathematical and statistical techniques, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Neighbor</Employername>
      <Employerlogo>https://logos.yubhub.co/neighbor.com.png</Employerlogo>
      <Employerdescription>Neighbor is a marketplace for self storage and parking, operating in almost every U.S. city.</Employerdescription>
      <Employerwebsite>https://www.neighbor.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/neighbor/034252c8-d93c-40a0-95dc-2830273acba0</Applyto>
      <Location>U.S.</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>782f7e7f-e0e</externalid>
      <Title>Revenue Technology - Data Strategy &amp; Operations Lead</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Data Strategy &amp; Operations leader to own the data foundations that power revenue execution. This role ensures that revenue data is reliable, interpretable, scalable, and usable as the business evolves and that teams can act on what they see with confidence.</p>
<p>In this role, you will report to the Head of Platforms &amp; Infrastructure and play a central role in shaping how Mercury models, governs, and operationalizes GTM data. You’ll partner closely with Data Engineering, Data Science, Solution Architecture, Platform Engineering, etc.</p>
<p>Some key responsibilities include:</p>
<ul>
<li>Owning the definition, structure, and reliability of data originating from revenue platforms (e.g., Salesforce, GTM tools, automation systems)</li>
<li>Serving as the primary decision owner for GTM-sourced tables and views used for revenue execution, forecasting inputs, lifecycle tracking, and signal-based workflows</li>
<li>Designing and evolving core GTM data models across Salesforce, ETL, and analytics layers</li>
<li>Partnering with Data Engineering to align GTM schemas with enterprise data models and define clear data contracts between source systems and downstream consumers</li>
<li>Partnering with Data Science / Analytics to ensure revenue data is interpretable, statistically sound, and reflects how the business actually operates</li>
<li>Owning clarity around data ownership boundaries, shared dependencies, and escalation paths when upstream or downstream changes impact revenue integrity</li>
<li>Defining and upholding data quality, freshness, consistency, and documentation standards for revenue platforms</li>
<li>Monitoring and improving pipeline reliability, performance, and scalability, proactively identifying fragile or redundant transformations</li>
<li>Identifying opportunities to automate manual or error-prone data workflows and reduce operational overhead</li>
<li>Acting as a data thought partner to Platforms &amp; Infrastructure, Revenue Operations, Analytics, and Security, advising on feasibility, tradeoffs, and sequencing for data-heavy initiatives</li>
</ul>
<p>You should have:</p>
<ul>
<li>7+ years of experience in data engineering or data systems roles within SaaS or technology companies</li>
<li>Deep experience designing and operating production data pipelines</li>
<li>Highly proficient in SQL and experienced in data modeling</li>
<li>Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery, Redshift)</li>
<li>Experience with ETL / ELT tooling (e.g., dbt, Airflow, Census, or similar)</li>
<li>Understanding of Salesforce data models and common GTM system architectures</li>
<li>Ability to translate business concepts into durable, well-structured data models</li>
<li>Clear communication skills with both technical and non-technical partners</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience supporting revenue, sales, or customer lifecycle data</li>
<li>Familiarity with event-based data platforms (e.g., Data Cloud or equivalents)</li>
<li>Experience working alongside platform engineering and security teams</li>
<li>Exposure to data governance, access controls, and compliance considerations</li>
<li>Experience mentoring or guiding other data practitioners</li>
</ul>
<p>The total rewards package at Mercury includes base salary, equity, and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$142,600 - $198,000</Salaryrange>
      <Skills>SQL, data modeling, modern data stacks, ETL/ELT tooling, Salesforce data models, GTM system architectures, event-based data platforms, data governance, access controls, compliance considerations, mentoring/guiding other data practitioners</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides banking services through Choice Financial Group and Column N.A.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5806201004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3048ccd4-7de</externalid>
      <Title>Data Analyst</Title>
      <Description><![CDATA[<p>We are seeking a Data Analyst to join our growing data team. As a Data Analyst at LayerZero, you will be at the forefront of shaping a rich data foundation for a company making a real impact in the web3 space. You will work closely with teams and leaders to uncover insights, drive decision-making, and fuel our next-generation products and services.</p>
<p>The successful candidate will dive headfirst into the world of crypto data, exploring on-chain wallets and contracts, block and transaction data, insights from in-house systems, and third-party intelligence. Your mission will be to combine these diverse datasets into rich, actionable data products for a broad group of stakeholders.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leveraging and expanding our ever-growing Kimball dimensional model.</li>
<li>Writing SQL to create and expand insights in our in-house reporting solutions.</li>
<li>Collaborating with stakeholders across the organization to conduct ad-hoc explorations and analytics.</li>
<li>Being a key owner of data quality, building out insights that serve the data team itself.</li>
<li>Composing pipelines by writing SQL code to clean, combine, refine, and aggregate data into the insights the organization needs.</li>
<li>Collaborating on new datasets to ingest into our Snowflake data warehouse, working closely with data engineers on your team.</li>
<li>Confidently shipping code that supports tens of billions of dollars in daily transaction volume.</li>
</ul>
<p>We are looking for someone with previous data analyst experience, likely with a bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Physics, or a related field, but we also consider and highly value equivalent practical experience.</p>
<p>Required skills:</p>
<ul>
<li>Strong SQL knowledge and experience</li>
<li>A proven track record in data modeling, statistics, and analytics</li>
<li>Experience working with a broad range of stakeholders</li>
<li>Strong convictions, weakly held</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with general programming</li>
<li>Experience with Snowflake</li>
<li>Experience building DAG-based data pipelines</li>
<li>Experience with streaming real-time data pipelines</li>
<li>Previous experience with blockchain technologies, smart contracts, and decentralized finance</li>
<li>Experience with Kimball dimensional modeling</li>
<li>Experience working on mid-to-large scale data stacks</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, data modeling, statistics, analytics, Snowflake, Kimball dimensional modeling, general programming, DAG-based data pipelines, streaming real-time data pipelines, blockchain technologies, smart contracts, decentralized finance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>LayerZero</Employername>
      <Employerlogo>https://logos.yubhub.co/layerzero.com.png</Employerlogo>
      <Employerdescription>LayerZero is a company founded in 2021, creating a community of cross-chain developers.</Employerdescription>
      <Employerwebsite>https://layerzero.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/layerzerolabs/jobs/5787956004</Applyto>
      <Location>Vancouver, BC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8a8c0eb9-6e6</externalid>
      <Title>Data Scientist, Product</Title>
      <Description><![CDATA[<p><strong>Job Title: Data Scientist, Product</strong></p>
<p>This is the founding hire for product analytics at Hebbia. As a data scientist, you will define what our core product metrics are: what counts as an active user, what engagement actually means, what signals correlate with retention.</p>
<p>This is not a dashboarding role. The goal is to shape product decisions with data, not just report on them. You will identify which workflows drive repeat usage, where users drop off, what features move engagement, and what differentiates power users from casual users across our enterprise customer base.</p>
<p>The role sits at the intersection of analytics engineering, product analytics, and data science. You will build the infrastructure and do the analysis. Define the metrics, build the pipelines, create the dashboards, and use what you built to inform the roadmap.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement Hebbia&#39;s core product metrics from scratch: active users, engagement, retention, feature adoption, account health. Build the canonical definitions the entire company uses.</li>
<li>Design and build the product analytics infrastructure: fact tables, clean data models, and the analytics layer that sits on top of our product data.</li>
<li>Build and maintain executive and product dashboards that leadership and product teams use to make decisions.</li>
<li>Write DAGs, transforms, and data pipelines that support analytics. Work with engineering to instrument the product so usage data is captured correctly.</li>
<li>Analyze customer behavior across our B2B customer base: account-level usage patterns, workflow adoption, expansion signals, and churn risk indicators.</li>
<li>Inform the product roadmap using data. Identify friction in user flows, surface feature adoption patterns, and highlight opportunities for product improvement.</li>
<li>Partner with product managers and engineers to translate product questions into measurable data and structured experiments.</li>
<li>Establish data quality standards and documentation so the metrics layer you build is trusted and maintained.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>3+ years of experience in product analytics, analytics engineering, or data science at a B2B SaaS company or high-growth startup</li>
<li>Strong in SQL and Python. You can write production-quality transforms, not just ad hoc queries.</li>
<li>Experience with modern data stack tools: dbt, Airflow, Snowflake, BigQuery, or similar. You understand data modeling and warehouse architecture.</li>
<li>You have built dashboards and reporting that product teams and leadership actually use to make decisions</li>
<li>You understand B2B product analytics: account-level metrics, multi-user workflows, enterprise engagement patterns, and why B2B retention analysis is different from consumer</li>
<li>You translate ambiguous product questions into structured analyses. You do not wait for someone to hand you a spec.</li>
<li>Strong product intuition. You care about why users behave the way they do, not just what the numbers say.</li>
<li>Clear communicator. You can present findings to engineers, product managers, and executives with equal effectiveness.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The salary range for this position is $180,000 to $260,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $260,000</Salaryrange>
      <Skills>SQL, Python, dbt, Airflow, Snowflake, BigQuery, data modeling, warehouse architecture, product analytics, analytics engineering, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside. Founded in 2020, Hebbia powers investment decisions for major asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4670090005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>58df2f04-af4</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Data Engineer to join our Data Platform team to partner with our product and business stakeholders across risk, operations, and other domains. As a Data Engineer, you will be responsible for building robust data pipelines and engineering foundations by ingesting data from disparate sources, ensuring data quality and consistency, and enabling better business decisions through reliable data infrastructure across core product areas.</p>
<p>Your primary focus will be on building scalable data pipelines using Airflow to orchestrate data workflows that ingest, transform, and deliver data from various sources into Snowflake and Databricks. You will also design and implement data models in Snowflake that support analytics, reporting, and ML use cases with a focus on performance, reliability, and scalability.</p>
<p>In addition, you will develop infrastructure as code using Terraform to automate and manage cloud resources in AWS, ensuring consistent and reproducible deployments. You will monitor data pipeline health and implement data quality checks to ensure accuracy, completeness, and timeliness of data as business needs evolve.</p>
<p>You will also optimize data processing workflows to improve performance, reduce costs, and handle growing data volumes efficiently. Troubleshooting and resolving data pipeline issues, working through ambiguity to get to the root cause and implementing long-term fixes will be a key part of your role.</p>
<p>As a Data Engineer, you will bridge gaps between data and the business by working with cross-functional teams across the US and India office to understand requirements and translate them into robust technical solutions. You will create comprehensive documentation on data pipelines, data models, and infrastructure, keeping documentation up to date and facilitating knowledge transfer across the team.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>2+ years of data engineering experience with strong technical skills and the ability to architect scalable data solutions.</li>
<li>Hands-on experience with Python for data processing, automation, and building data pipelines.</li>
<li>Proficiency with workflow orchestration tools, preferably Airflow, including DAG development, task dependencies, and monitoring.</li>
<li>Strong SQL skills and experience with cloud data warehouses like Snowflake, including performance optimization and data modeling.</li>
<li>Experience with cloud platforms, preferably AWS (S3, Lambda, EC2, IAM, etc.), and understanding of cloud-based data architectures.</li>
<li>Experience working cross-functionally with data analysts, analytics engineers, data scientists, and business stakeholders to understand requirements and deliver solutions.</li>
<li>An ownership mentality – this engineer will be responsible for the reliability and performance of their data pipelines and expected to fully understand data flows, dependencies, and their implications on downstream users.</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience with dbt for transformation logic and analytics engineering workflows integrated with data pipelines.</li>
<li>Familiarity with Databricks for large-scale data processing, including Spark optimization and Delta Lake.</li>
<li>Experience with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources and data infrastructure.</li>
<li>Knowledge of data modeling concepts (e.g., dimensional modeling, star/snowflake schemas, slowly changing dimensions).</li>
<li>Experience with CI/CD practices for data pipelines and automated testing frameworks.</li>
<li>Experience with streaming data and real-time processing frameworks.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Python, SQL, Snowflake, Databricks, AWS, Terraform, data engineering, data pipelines, data modeling, dbt, Infrastructure as Code, CI/CD, streaming data, real-time processing</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Greenlight</Employername>
      <Employerlogo>https://logos.yubhub.co/greenlight.com.png</Employerlogo>
      <Employerdescription>Greenlight is a family fintech company that provides a banking app for families, serving over 6 million parents and kids.</Employerdescription>
      <Employerwebsite>https://www.greenlight.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/greenlight/e98d9733-8b8c-4ce4-997d-6cf14e35b2f3</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>369c1543-2c5</externalid>
      <Title>Senior Full Stack Engineer - Conversation Intelligence</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it&#39;s at Cresta.</p>
<p>The QM &amp; Coaching Team at Cresta is vital for our post-call intelligence products. Our core mission is to utilize Large Language Models (LLMs) and advanced AI techniques to transform conversations into actionable intelligence, streamlining the process of agent performance review and coaching. We build systems that deeply analyze agent behaviors, delivering structured insights into their performance and pinpointing critical areas for improvement. Through AI-powered exploration and interactive workflows, our work empowers organizations to leverage data-driven decisions and enhance the overall customer experience.</p>
<p>As a Senior Fullstack Engineer, you’ll play a key role in building and scaling the no-code platform that powers Cresta’s processing capabilities. This platform empowers non-technical users to configure conversation workflows and apply automation without writing code. You will work across the stack, from building intuitive UIs to robust backend services, enabling customers to unlock value from conversations quickly and flexibly.</p>
<p>You’ll partner with designers, product managers and AI/ML teams to turn complex requirements into delightful and performant product experiences. You’ll also help shape the future of our no-code architecture, ensuring it scales with our product and customer base.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain end-to-end features for Cresta’s no-code processing platform.</li>
<li>Build intuitive UI components and visual editors for configuring conversation logic and workflows.</li>
<li>Architect and implement backend services and APIs to power a dynamic no-code interface.</li>
<li>Work closely with ML engineers to expose conversation intelligence in an accessible and configurable way.</li>
<li>Develop data models and storage layers using Postgres, ClickHouse, and Elasticsearch.</li>
<li>Identify areas for performance improvements and scalability in both frontend and backend systems.</li>
<li>Ensure reliability, security, and maintainability across the full technology stack.</li>
<li>Participate in design discussions, code reviews, and continuous integration processes.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Proven experience as a Senior Fullstack Engineer, with strong frontend and backend contributions to no-code or low-code platforms.</li>
<li>Deep understanding of modern frontend technologies (React, TypeScript, etc.) and design patterns for building complex UIs.</li>
<li>Backend expertise in Python, Go, or similar languages; experience with RESTful APIs and microservices architecture.</li>
<li>Strong foundation in database systems and data modeling; hands-on experience with Postgres, ClickHouse, or Elasticsearch is a plus.</li>
<li>Experience building platforms or tools for non-technical users, especially in AI/ML, automation, or workflow spaces.</li>
<li>Familiarity with managing state, execution logic, or visual programming paradigms is a bonus.</li>
<li>Strong collaboration and communication skills; able to work effectively across product, design, and engineering teams.</li>
<li>Passion for building systems that simplify complex problems and empower users.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>
<li>Flexible PTO to take the time you need, when you need it</li>
<li>Paid parental leave for all new parents welcoming a new child</li>
<li>Retirement savings plan to help you plan for the future</li>
<li>Remote work setup budget to help you create a productive home office</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced</li>
<li>In-office meal program and commuter benefits provided for onsite employees</li>
</ul>
<p>Compensation at Cresta:</p>
<ul>
<li>Cresta’s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>
<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>
<li>OTE Range: $205,000–$270,000, plus equity</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Modern frontend technologies (React, TypeScript, etc.), Design patterns for building complex UIs, Backend expertise in Python, Go, or similar languages, RESTful APIs and microservices architecture, Database systems and data modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that provides a platform for contact centers to discover customer insights and behavioral best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5026013008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e231d72c-b82</externalid>
      <Title>Senior Software Engineer, Backend (Berlin)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the contact center workforce with AI. As a senior full-stack engineer with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse, and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong></p>
<ul>
<li>Paid parental leave to support you and your family</li>
<li>Monthly Health &amp; Wellness allowance</li>
<li>Work from home office stipend to help you succeed in a remote environment</li>
<li>Lunch reimbursement for in-office employees</li>
<li>PTO: 28 days in Germany</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4668107008</Applyto>
      <Location>Berlin, Germany (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1c431665-20b</externalid>
      <Title>Data Governance and Management Lead</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. We are seeking a Data Governance &amp; Management Lead within the Global Analytics team to help develop and implement data controls, data quality standards, and governance practices across the platform.</p>
<p>This role supports data integrity, metadata, and access controls to help ensure data is accurate, consistent, and fit for purpose. This is a hands-on role that requires strong technical fluency, structured problem-solving, and the ability to translate governance requirements into practical implementations within data systems.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Working knowledge of data governance, data management, and data quality frameworks</li>
<li>Experience supporting the implementation of data controls within data pipelines and reporting systems</li>
<li>Advanced proficiency in SQL, Python, or other data query and analysis tools</li>
<li>Proficiency with business intelligence and data visualization tools such as Looker, Power BI, or Tableau</li>
<li>Experience with database design, including understanding complex data schemas and data extraction</li>
<li>Familiarity with data lineage, metadata management, and data modeling concepts</li>
<li>Ability to define and implement data quality rules and validation checks</li>
<li>Understanding of data access principles, including role-based access and data classification</li>
<li>Ability to document data processes and controls clearly and in a structured way</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Oversee the data governance program, identify improvement areas, and implement best practices to enhance data quality, integrity, and security</li>
<li>Develop and implement data quality standards and monitoring processes, including establishing data quality metrics and thresholds</li>
<li>Assist in managing the data issue lifecycle, including tracking and supporting remediation efforts</li>
<li>Manage the data governance platform (Atlan) and serve as the primary subject matter expert</li>
<li>Assist in data classification efforts, including identifying and categorizing sensitive data and critical data elements</li>
<li>Manage external data requests, including regulatory inquiries, ensuring compliance with banking regulations</li>
<li>Monitor and report on key data governance metrics and KPIs, providing insights and recommendations to senior management</li>
<li>Lead data governance meetings and workshops, facilitating discussions and decision-making to drive the data governance program forward</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Have a deep understanding of Anchorage Digital’s strategy and business lines.</li>
<li>Understand how data supports decision-making and operational processes across the organization</li>
<li>Possess strategic thinking and vision, with the ability to develop and implement a comprehensive data governance strategy aligned with organizational goals and objectives</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Able to communicate complex issues clearly and credibly to a wide range of audiences.</li>
<li>Document data processes, controls, and findings clearly for internal stakeholders</li>
<li>Build effective relationships and rapport with stakeholders, including cross-functional and external partners</li>
<li>Communicate, organize, and execute cross-team goals and projects, leveraging relationships and resources to solve problems</li>
<li>Collaborate with Data Platform, InfoSec, Product, and Engineering partners</li>
</ul>
<p><strong>You may be a fit for this role if you have:</strong></p>
<ul>
<li>Bachelor’s degree required. Advanced degrees or certifications in data analytics or governance preferred</li>
<li>4–7 years of experience in data governance, data management, data quality, or data analytics</li>
<li>Hands-on experience implementing or supporting data quality and governance practices</li>
<li>Experience managing data classification, access controls, and external data requests</li>
<li>Experience working with data pipelines, reporting systems, or analytical datasets</li>
<li>Experience writing, editing, or reviewing technical documentation for regulatory or banking contexts</li>
<li>Strong attention to detail, with a focus on accuracy, completeness, and consistency in data governance processes and controls</li>
<li>Ability to work independently on defined tasks and contribute to team objectives</li>
<li>Strong problem-solving skills and comfort working in structured, detail-oriented environments</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You&#39;ve kept up to date with the proliferation of blockchain and crypto innovations.</li>
<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system. :)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data governance, data management, data quality frameworks, SQL, Python, Looker, Power BI, Tableau, database design, data lineage, metadata management, data modeling, data access principles, role-based access, data classification</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/5bfbd64c-933e-418c-9c07-5aea50212c0d</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3d849fbc-058</externalid>
      <Title>Member of Product, Data Platform</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>
<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>
<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>
<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>
<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>
<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>
<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>
<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>
<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>
<li>Deliver complex, cross-functional projects with multiple dependencies across engineering, analytics, compliance, and operations teams.</li>
<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>
<li>Ability to understand and reason about system architecture, including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components, and to communicate clear requirements to engineering.</li>
<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>
<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>
<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>
<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>
<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>
<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>
<li>Drive company objectives through strategic data platform decisions and initiatives.</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Effectively influence and motivate teams across the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>
<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>
<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>
<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>
</ul>
<p><strong>You may be a fit for this role if you have:</strong></p>
<ul>
<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>
<li>Proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>
<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>
<li>Experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>
<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>
<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>
<li>Empathy and adaptability that not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>
<li>A tendency to describe yourself as some combination of the following: creative, humble, ambitious, detail-oriented, hard-working, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>
</ul>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You have hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>
<li>You have built data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>
<li>You have experience supporting clients with data-related issues or concerns.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>79e9796a-15c</externalid>
      <Title>Tech Strategy Product Manager, Google DeepMind</Title>
      <Description><![CDATA[<p>You will support the analytical and operational frameworks that enable Leadership (SVP+) decision forums to govern compute utilization and planning. This involves translating complex data into strategic insights that align DeepMind and Google&#39;s broader AI strategy.</p>
<p>As a Tech Strategy Product Manager, you will conduct rigorous, deep-dive investigations into Google&#39;s hardest long-range strategic questions, providing data-driven clarity needed for executive decision-making. You will model complex scenarios and trade-offs to surface opinionated strategies for maximizing AI value across Google.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Investigative Modeling: Conducting deep-dive investigations into Google&#39;s hardest long-range strategic questions, providing data-driven clarity needed for executive decision-making.</li>
<li>Scenario Architecture: Modelling complex scenarios and trade-offs to surface opinionated strategies for maximizing AI value across Google.</li>
<li>SVP+ Content Support: Contributing to the development of high-stakes presentations, communications, and mandates for SVP-level forums, ensuring technical accuracy and narrative flow.</li>
<li>Cross-Functional Partnership: Acting as a bridge between technical teams and strategy, partnering with virtual teams across Google to ensure product and policy plans are grounded in reality.</li>
<li>Execution &amp; Monitoring: Helping maintain per-mandate monitoring views to keep programs accountable; collaborating with MLSA to ensure company-wide mandates are organized and delivering on goals.</li>
<li>Strategic Cataloging: Maintaining a &#39;living catalog&#39; of ongoing experiments and proposed innovations across Google to identify topics worthy of senior executive discussion.</li>
</ul>
<p>This role is for you if you demonstrate the analytical rigor and strategic curiosity required to sustain a high level of performance across complex, pan-Google workstreams, pairing a &#39;big picture&#39; perspective with the ability to remain precise and detail-oriented at the speed of an SVP-led forum.</p>
<p>You should have a BA/BS degree or equivalent practical experience in a technical and/or business area, with 5+ years of experience in product management, management consulting, or a high-growth corporate strategy role. Exceptional organisational talent and experience running high-velocity, high-impact programs are also essential.</p>
<p>In addition, familiarity with the role of compute and data in building LLMs and the technical capabilities of these models would be an advantage.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>machine learning, compute infrastructure, large-scale data modeling, product management, management consulting, corporate strategy, familiarity with LLMs, technical capabilities of LLMs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7535915</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>0047e1c5-08d</externalid>
      <Title>Backend Engineer, Forward Deployed Engineering</Title>
      <Description><![CDATA[<p>As a Backend Engineer on the Forward Deployed Engineering team at Stripe, you will work alongside AI agents to serve users at scale. This involves maintaining real-time integration maps, running shadow tests against user setups, and performing automated state reconciliation between Stripe and user systems. Your job is the work that requires an engineer: making judgment calls on ambiguous problems, building relationships with user engineering teams, making product decisions, and designing solutions.</p>
<p>You will engage directly with users to understand their revenue, billing, and payments requirements. You will translate what you learn into technical solutions and bring that user reality back to product teams. This is a genuinely user-facing role, not user-facing in the &quot;I read a dashboard&quot; sense.</p>
<p>You will build across product boundaries, designing and deploying products and solutions that address product-market fit gaps, not just in Billing but across multi-product boundaries (Payments + Invoicing + Global LPMs). You will embed within Stripe product engineering teams to co-develop the highest-leverage capabilities.</p>
<p>You will build reusable solutions, not one-off fixes. You will contribute to a customization framework for RFA and adjacent products: tailored billing logic, financial workflows, integrations (custom metering, product catalog integrations, checkout flows). You will build patterns and blueprints that scale beyond the individual engagement.</p>
<p>You will provide architectural guidance, reviewing user architectures, advising on best practices, and optimizing integration and performance for complex enterprise environments. You will contribute to a growing library of architectural patterns for the field.</p>
<p>You will resolve critical technical challenges, diagnosing and fixing complex product/engineering problems across the stack. You will identify systemic improvements that prevent recurrence and improve platform stability.</p>
<p>You will inform the product roadmap: the integration gaps, migration friction, and multi-product failures you surface directly shape Stripe&#39;s product strategy. You will advocate for what users actually need based on what you&#39;ve seen firsthand.</p>
<p>You will raise the bar on engineering, improving engineering standards, tooling, and processes within the team. You will help build for sustainability as the team grows.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>5+ years of experience in software engineering, with a strong focus on backend systems, Proven ability to design, build, and maintain highly available, scalable, and secure systems, Strong command of distributed systems, API design, and data modeling, Excellent problem-solving skills and the ability to quickly grasp complex technical and business domains, Clear communicator, both written and verbal, with technical and non-technical stakeholders including external users, Experience with financial automation or billing products (e.g., Stripe Billing, Tax, Revenue Recognition, or similar), Experience with multi-product integration: stitching together payments, invoicing, billing, and related systems, Familiarity with extensibility models, custom solution frameworks, or platform development, Experience working with large enterprise users or in a customer-facing engineering role, Prior experience in a fast-paced, ambiguous environment where priorities shift based on user needs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7249744</Applyto>
      <Location>N/A</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>a78d4df3-eea</externalid>
      <Title>Tech Strategy Product Manager, Google DeepMind</Title>
      <Description><![CDATA[<p>You will support the analytical and operational frameworks that enable Leadership (SVP+) decision forums to govern compute utilization and planning. As a Tech Strategy Product Manager at Google DeepMind, you will be responsible for translating complex data into strategic insights that align DeepMind and Google&#39;s broader AI strategy.</p>
<p>About Us</p>
<p>Artificial Intelligence could be one of humanity&#39;s most useful inventions. At Google DeepMind, we&#39;re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence.</p>
<p>Key Responsibilities</p>
<ul>
<li>Conduct rigorous, deep-dive investigations into Google&#39;s hardest long-range strategic questions, providing the data-driven clarity needed for executive decision-making.</li>
<li>Model complex scenarios and trade-offs to surface opinionated strategies for maximizing AI value across Google.</li>
<li>Contribute to the development of high-stakes presentations, communications, and mandates for SVP-level forums, ensuring technical accuracy and narrative flow.</li>
<li>Act as a bridge between technical teams and strategy, partnering with virtual teams across Google to ensure product and policy plans are grounded in reality.</li>
<li>Help maintain per-mandate monitoring views to keep programs accountable; collaborate with MLSA to ensure company-wide mandates are organized and delivering on goals.</li>
<li>Maintain a &#39;living catalog&#39; of ongoing experiments and proposed innovations across Google to identify topics worthy of senior executive discussion.</li>
</ul>
<p>About You</p>
<p>You are a Product Manager who thrives in high-velocity, high-intensity environments where the stakes are massive and the pace is relentless. You are an execution-oriented IC who can quickly create &#39;something out of nothing,&#39; iterate based on feedback from senior leaders, and move immediately to the next challenge.</p>
<p>This role is for you if:</p>
<ul>
<li>You demonstrate the analytical rigor and strategic curiosity required to sustain a high level of performance across complex, pan-Google workstreams, pairing a &#39;big picture&#39; perspective with the ability to remain precise and detail-oriented at the speed of an SVP-led forum.</li>
<li>You are a master of execution: you don&#39;t just point at problems; you build the models (often joining together disparate data sources from multiple teams) and write the documents that solve them.</li>
<li>You are an expert at synthesis, possessing the communication skills to distill complex &#39;noise&#39; into actionable &#39;signal&#39; for executive audiences, delivering elite-level work even under the most demanding deadlines.</li>
</ul>
<p>Skills and Experience</p>
<ul>
<li>BA/BS degree or equivalent practical experience in a technical and/or business area.</li>
<li>5+ years of experience in product management, management consulting, or a high-growth corporate strategy role.</li>
<li>Exceptional organizational talent and experience running high-velocity, high-impact programs.</li>
<li>Strong analytical fluency: Deep experience in a role that deals with machine learning, compute infrastructure, or large-scale data modeling.</li>
<li>Innovative problem solver who can turn ambiguous, messy problem spaces into clear, executable solutions.</li>
</ul>
<p>In addition, the following would be an advantage:</p>
<ul>
<li>Familiarity with the role of compute and data in building LLMs and the technical capabilities of these models.</li>
<li>A proven track record of &#39;managing up&#39; and supporting senior stakeholders in a complex, matrixed business environment.</li>
<li>Experience in corporate planning or product operations within a major tech ecosystem.</li>
<li>Ability to manage multiple, concurrent, highly complex workstreams with minimal friction.</li>
<li>A &#39;bias towards action&#39; and a history of creative problem-solving in fast-paced settings.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, Compute Infrastructure, Large-Scale Data Modeling, Product Management, Management Consulting, Corporate Strategy</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a research and development subsidiary of Alphabet Inc., focusing on artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7535915</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>afa2aeaa-57c</externalid>
      <Title>BIE II</Title>
      <Description><![CDATA[<p>Have you ever ordered a product from Amazon and been amazed at how fast it gets to you?</p>
<p>Every day, Amazon engineers are relentlessly working to decrease the time from Click to Deliver for your products. The Amazon Fulfillment Technologies (AFT) team owns all of the software and infrastructure that powers Amazon&#39;s world-class fulfillment engine. Our team is building complex, massive data systems to capture data during every step of the automated pipeline and use that data to proactively predict efficiency and cost improvements, so we can deliver packages to our customers faster.</p>
<p>We are currently in search of a brilliant, self-driven, and seasoned BIE II to join our team. In this role, you will have the opportunity to build scalable solutions, including extensive data models and complex ETL pipelines, and to utilize your expertise to raise the bar on data timeliness, discoverability, and availability.</p>
<p><strong>Key Job Responsibilities</strong></p>
<ul>
<li>Own the development and maintenance of ongoing metrics, reports, analyses, and dashboards on the key drivers of our business</li>
<li>Partner with Product Managers and business teams to consult on, develop, and implement KPIs, automated reporting solutions, and infrastructure improvements to meet business needs</li>
<li>Develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support business needs</li>
<li>Perform both ad-hoc and strategic analyses</li>
<li>Strong verbal/written communication and presentation skills, including an ability to effectively communicate with both business and technical teams.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.</li>
<li>Experience with data visualization using Tableau, Quicksight, or similar tools</li>
<li>Experience with data modeling, warehousing and building ETL pipelines</li>
<li>Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift</li>
<li>Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets</li>
<li>Master&#39;s degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data analysis, data visualization, ETL pipelines, data modeling, SQL, Python, AWS solutions, data mining, large-scale databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Amazon</Employername>
      <Employerlogo>https://logos.yubhub.co/amazon.jobs.png</Employerlogo>
      <Employerdescription>Amazon is a multinational technology company that focuses on e-commerce, cloud computing, digital streaming, and artificial intelligence. It is one of the world&apos;s largest and most valuable companies.</Employerdescription>
      <Employerwebsite>https://amazon.jobs</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://amazon.jobs/en/jobs/3197763/bi-engineer-aft-bi</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>7af16166-8fd</externalid>
      <Title>FBS Senior Data Domain Architect</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Objective:</strong> Designs and develops Data/Domain IT architecture (integrated process, applications, data and technology) solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Utilizes in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery.</li>
<li>Solves complex problems and partners effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization.</li>
<li>Works independently with minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge.</li>
<li>Reviews high level design to ensure alignment to Solution Architecture.</li>
<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives.</li>
<li>Mentors developers and creates reference implementations/frameworks.</li>
<li>Partners with System Architects to elaborate capabilities and features.</li>
<li>Delivers single domain architecture solutions and executes a continuous domain level architecture improvement roadmap. Actively supports design and steering of a continuous delivery pipeline.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Over 6 years of experience as a senior domain architect for Data domains</li>
<li>Advanced English Level</li>
<li>Master&#39;s degree (PLUS)</li>
<li>Insurance Experience (PLUS), Financial Services (PLUS)</li>
</ul>
<p><strong>Technical &amp; Business Skills:</strong></p>
<ul>
<li>ETL/ELT Tools (Informatica, DBT) - Advanced (7+ Years)</li>
<li>Data Architecture / Data Modeling – Advanced (MUST)</li>
<li>Data Warehouse – Advanced (MUST)</li>
<li>Cloud Data Platforms - Advanced</li>
<li>Data Integration Tools – Advanced</li>
<li>Snowflake or Databricks - Intermediate (4-6 Years) MUST</li>
<li>Any Cloud - Intermediate (4-6 Years)</li>
<li>Power BI or Tableau - Intermediate (4-6 Years)</li>
<li>Data Science tools (Sagemaker, Databricks) - Intermediate (4-6 Years)</li>
<li>Data Lakehouse – Intermediate (MUST)</li>
<li>Data Governance - Intermediate</li>
<li>AI/ML - Entry Level (PLUS)</li>
<li>Master Data Management - Intermediate</li>
<li>Operational Data Management - Intermediate</li>
</ul>
<p><strong>Benefits:</strong></p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse, Data Governance, AI/ML, Master Data Management, Operational Data Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global consulting and technology services company with nearly 350,000 employees across over 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/jdUFHSPZZjHsgd3TR4R3BS/remote-fbs-senior-data-domain-architect-in-colombia-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>5a4be76f-140</externalid>
      <Title>FBS Marketing Automation &amp; Integration Engineer</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>The team is responsible for architecting and maintaining scalable MarTech solutions, with a focus on data integration, customer journey orchestration, and marketing automation. This team operates within the Data, Tech, and Operations tower of the Direct BU.</p>
<p>The Marketing Automation &amp; Integration Engineer role centers on the implementation and optimization of a MarTech data flow pattern involving Snowflake, Segment, Braze, and other SaaS platforms. Key responsibilities include:</p>
<ul>
<li>Design and maintain data pipelines between Snowflake, Segment CDP, Braze, and additional platforms</li>
<li>Implement real-time and batch data ingestion strategies</li>
<li>Manage customer event tracking and identity resolution within Segment</li>
<li>Orchestrate personalized marketing campaigns in Braze using dynamic segmentation and behavioral triggers</li>
<li>Ensure data integrity and feedback loops from Braze back into Snowflake via Segment</li>
<li>Automate data transformations and enrichment using scripting languages</li>
<li>Monitor system performance and troubleshoot integration issues across platforms</li>
</ul>
<p>This position comes with a competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Segment CDP, Braze, Snowflake, Scripting Languages (Python / JS), Reverse ETL, Data Orchestration Platforms, Customer Data Schema Design, Data modeling and ETL/ELT Pipeline, API Integrations / Webhooks, Customer journey mapping and automation logic, Familiarity with insurance industry data and customer lifecycle models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a multinational consulting and professional services company that provides IT consulting, systems integration, and business process outsourcing services.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/qJr4ny8yGpdyCcPXUusbL6/remote-fbs-marketing-automation-%26-integration-engineer-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>7b03b30a-b20</externalid>
      <Title>FBS Senior Data Domain Architect</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>
<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. Capgemini is responsible for managing local payroll and benefits.</p>
<p><strong>Objective:</strong> Designs and develops Data/Domain IT architecture (integrated process, applications, data and technology) solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Utilizes in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery.</li>
<li>Solves complex problems and partners effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization.</li>
<li>Works independently with minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge.</li>
<li>Reviews high level design to ensure alignment to Solution Architecture.</li>
<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives.</li>
<li>Mentors developers and creates reference implementations/frameworks.</li>
<li>Partners with System Architects to elaborate capabilities and features.</li>
<li>Delivers single domain architecture solutions and executes a continuous domain level architecture improvement roadmap. Actively supports design and steering of a continuous delivery pipeline.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse, Data Governance, Master Data Management, Operational Data Management, AI/ML</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across over 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/1U952YA2QBa8zK7Tm5d3Lm/remote-fbs-senior-data-domain-architect-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>66f9fbf1-7df</externalid>
      <Title>FBS Business Solution Analyst (Data Analyst)</Title>
      <Description><![CDATA[<p>Join a collaborative, cross-functional team driving analytics solutions that improve business processes, enhance data insights, and support strategic initiatives across the organisation. As a Business Solution Analyst, you&#39;ll develop moderately complex, value-based analytics solutions to support quality, business functionality, and strategic initiatives. You&#39;ll work with cutting-edge technologies and influence decisions that shape the company&#39;s future.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop moderately complex, value-based analytics solutions to support quality, business functionality, and strategic initiatives</li>
<li>Collaborate with cross-functional teams to gather requirements and translate them into actionable analytics solutions</li>
<li>Create clear, detailed documentation and business plans addressing stakeholder needs</li>
<li>Act as a liaison between IT and Product Management for data solutions</li>
<li>Design and implement enterprise-wide data integration strategies using data marts and data warehouse applications</li>
<li>Continuously improve data platforms, dashboards, and reports to meet evolving business needs</li>
<li>Provide guidance on UAT, technical specifications, strategic planning, and training, with limited supervision</li>
<li>Apply an intermediate understanding of financial services data, structures, and cloud-based storage to ensure compliance while supporting business needs</li>
<li>Manage data from multiple sources and complete moderately complex projects using strong project management skills</li>
<li>Ensure data quality, troubleshoot issues, and design creative solutions to prevent or mitigate future problems</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in a relevant field (Data Science, Statistics, Finance, Computer Science, or similar)</li>
<li>Background in insurance or financial services</li>
<li>Experience creating data visualizations using Power BI</li>
<li>Strong proficiency in Python for analytics and data modeling</li>
<li>Skilled in ad hoc analysis, data modeling, and applying machine learning techniques</li>
<li>Proficient in SQL and complex queries</li>
<li>Experience with scripting tools and languages (e.g., batch files, VBA for Excel)</li>
<li>Strong problem-solving skills, attention to detail, and ability to work independently on moderately complex projects</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Power BI, Python, SQL, ad hoc analysis, data modeling, machine learning, scripting tools, batch files, VBA for Excel</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
<Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across over 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/n3cFuhdgPzXehrqU3Y2DVi/remote-fbs-business-solution-analyst-(data-analyst)-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>630a981c-b19</externalid>
      <Title>Digital Marketing Architect - Consumer Goods, Retail and Logistics - Germany</Title>
      <Description><![CDATA[<p>Boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges. We are growing and are looking for people to join our team.</p>
<p><strong>The Role</strong></p>
<p>We are seeking a visionary and experienced Digital Marketing Architect to design, build, and optimize our digital marketing technology stack. You will be the central owner of our MarTech blueprint, ensuring all platforms work in harmony to support our omnichannel retail strategy.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>MarTech Stack Architecture</strong></p>
<p>Design and govern the end-to-end architecture of our marketing technology stack, including our Customer Data Platform (CDP), e-commerce platform, personalization engine, loyalty platform, and campaign management tools.</p>
<p><strong>Omnichannel Customer Journey Design</strong></p>
<p>Architect the data flows and system integrations necessary to create a unified 360-degree customer view, connecting data from online touchpoints (website, app) and offline systems (Point-of-Sale, in-store events).</p>
<p><strong>Data &amp; Personalization Strategy</strong></p>
<p>In collaboration with the data team, design the marketing data model within our CDP. Architect solutions that leverage this data to deliver real-time, personalized content, product recommendations, and offers across all digital channels.</p>
<p><strong>Technology Evaluation &amp; Roadmap</strong></p>
<p>Lead the discovery, evaluation, and selection of new marketing technologies. Develop and maintain a multi-year MarTech roadmap that aligns with strategic business objectives for growth and customer experience.</p>
<p><strong>Collaboration &amp; Enablement</strong></p>
<p>Work closely with brand marketers, e-commerce managers, and CRM specialists to understand their needs and translate them into technical requirements and solutions. Empower teams by ensuring the technology is effective and user-friendly.</p>
<p><strong>Qualifications &amp; Skills</strong></p>
<p><strong>Experience</strong></p>
<p>8+ years in digital marketing technology, marketing operations, or solutions architecture. Direct experience within the retail or e-commerce industry is essential.</p>
<p><strong>MarTech Platform Expertise</strong></p>
<p>Proven hands-on experience architecting and integrating core retail marketing platforms:</p>
<ul>
<li>Customer Data Platforms (CDP): e.g., Segment, Tealium, Bloomreach</li>
<li>E-commerce Platforms: e.g., Shopify Plus, Salesforce Commerce Cloud, Magento (Adobe Commerce), Commercetools</li>
<li>Marketing/CRM Platforms: e.g., Salesforce Marketing Cloud, Braze, Emarsys</li>
<li>Personalization Engines: e.g., Dynamic Yield, Klevu, Nosto</li>
</ul>
<p><strong>Technical Proficiency</strong></p>
<ul>
<li>Strong understanding of APIs (REST, GraphQL) and data integration patterns.</li>
<li>Proficiency in SQL for data validation and analysis.</li>
<li>Solid understanding of data modeling, schema design, and identity resolution concepts.</li>
<li>Familiarity with web technologies (JavaScript, HTML, CSS) and tag management systems (Google Tag Manager).</li>
</ul>
<p><strong>Retail Business Acumen</strong></p>
<p>Deep understanding of key retail metrics (e.g., Customer Lifetime Value - CLV, Conversion Rate, Average Order Value - AOV) and the ability to connect technology solutions to business outcomes.</p>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with headless commerce and composable architecture.</li>
<li>Familiarity with loyalty program platforms and their integration.</li>
<li>Knowledge of Digital Asset Management (DAM) and Product Information Management (PIM) systems.</li>
<li>Experience in both B2C and D2C retail environments.</li>
<li>Professional fluency in German is a strong asset.</li>
</ul>
<p><strong>About your team</strong></p>
<p>Our CRL (Consumer Goods, Retail &amp; Logistics) practice helps some of the largest global firms and most recognizable local brands solve their biggest challenges in today’s age of constant disruption. With diverse services spanning growth strategy and new product innovation, to omni-channel customer experience, supply chain resiliency and AI-driven new business models, we help clients shape and achieve their growth agenda for a sustainable future.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal goals. Curious to learn more? We’d love to hear from you.... Apply today!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Customer Data Platforms (CDP), E-commerce Platforms, Marketing/CRM Platforms, Personalization Engines, APIs (REST, GraphQL), SQL, Data modeling, Schema design, Identity resolution, Web technologies (JavaScript, HTML, CSS), Tag management systems (Google Tag Manager), Retail business acumen, Headless commerce and composable architecture, Loyalty program platforms, Digital Asset Management (DAM), Product Information Management (PIM) systems, B2C and D2C retail environments, German language</Skills>
      <Category>Marketing</Category>
      <Industry>Consulting</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with market leading brands across sectors. Our parent organization Infosys is a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/d8M3v8FZmkKSxx3yZUqYJ7/hybrid-digital-marketing-architect---consumer-goods%2C-retail-and-logistics---germany-in-munich-at-infosys-consulting---europe</Applyto>
      <Location>Munich, Bavaria, Germany</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ba5e5f71-701</externalid>
      <Title>FBS Associate Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS Associate Analytics Engineer</p>
<p>We are seeking an FBS Associate Analytics Engineer to join our team. As an FBS Associate Analytics Engineer, you will play a key role in transforming raw data into structured, high-quality datasets that are ready for analysis. You will work on low to moderately complex business problems, receiving coaching and guidance from data leadership. Your primary focus will be on end-to-end data workflow, including data ingestion, transformation, modeling, and validation to enable data-driven decision-making across the organization.</p>
<p>Responsibilities</p>
<ul>
<li>Emerging data infrastructure development with coaching and guidance: Pipeline Design and Development – Architects and builds scalable data pipelines using modern ETL (Extract, Transform, Load) tools and frameworks such as DBT (Data Build Tool), Apache Airflow, or similar.</li>
<li>Automates data ingestion processes from various sources including databases, APIs, and third party services.</li>
<li>Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery.</li>
<li>Optimizes storage solutions for performance, cost efficiency, and scalability.</li>
<li>Data Modeling - Develops and maintains logical and physical data models to support business analytics.</li>
<li>Creates and manages dimensional models, star/snowflake schemas, and other data structures.</li>
<li>Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages.</li>
<li>Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data.</li>
<li>Technology Stack - Utilizes modern data tools and technologies such as SQL, Python, dbt, Airflow, and cloud platforms like AWS, Azure, or GCP.</li>
<li>Continuous Learning – Stays updated with the latest trends, best practices, and advancements in data engineering and analytics.</li>
<li>Participates in professional development opportunities to enhance technical and analytical skills.</li>
<li>Provides code as requirements for hardening and operationalization by technology with significant coaching, guidance, and feedback.</li>
<li>Performs other duties as assigned.</li>
</ul>
<p>Requirements</p>
<ul>
<li>1+ year of experience working in a data environment</li>
<li>Good analytics mindset</li>
<li>Knowledge of SQL</li>
<li>Strong verbal communication and listening skills.</li>
<li>Demonstrated written communication skills.</li>
<li>Demonstrated analytical skills.</li>
<li>Demonstrated problem solving skills.</li>
<li>Effective interpersonal skills.</li>
<li>Seeks to acquire knowledge in area of specialty.</li>
<li>Possesses strong technical aptitude. Basic experience with SQL or similar, dimensional modeling, pipeline orchestration, building data pipelines to transform data, and BI visualizations.</li>
<li>Python experience is a plus</li>
</ul>
<p>Benefits</p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, DBT, Apache Airflow, Snowflake, Redshift, BigQuery, Data Modeling, Data Transformation, Data Quality Assurance, Cloud Platforms, Python experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/jaxxjRWH9XxkRbr1TCrPb5/remote-fbs-associate-analytics-engineer-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>dcfed817-412</externalid>
      <Title>FBS Senior Data Domain Architect</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Domain Architect to join our team. As a Senior Data Domain Architect, you will design and develop Data/Domain IT architecture solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>
<p><strong>What to expect on your journey with us:</strong></p>
<ul>
<li>A solid and innovative company with a strong market presence</li>
<li>A dynamic, diverse, and multicultural work environment</li>
<li>Leaders with deep market knowledge and strategic vision</li>
<li>Continuous learning and development</li>
</ul>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Utilize in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery</li>
<li>Solve complex problems and partner effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization</li>
<li>Work independently with minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge</li>
<li>Review high level design to ensure alignment to Solution Architecture</li>
<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives</li>
<li>Mentor developers and create reference implementations/frameworks</li>
<li>Partner with System Architects to elaborate capabilities and features</li>
<li>Deliver single domain architecture solutions and execute a continuous domain level architecture improvement roadmap. Actively support design and steering of a continuous delivery pipeline</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Over 6 years of experience as a senior domain architect for Data domains</li>
<li>Advanced English Level</li>
<li>Master&#39;s degree (PLUS)</li>
<li>Insurance Experience (PLUS), Financial Services (PLUS)</li>
</ul>
<p><strong>Technical &amp; Business Skills:</strong></p>
<ul>
<li>ETL/ELT Tools (Informatica, DBT) - Advanced (7+ Years)</li>
<li>Data Architecture / Data Modeling – Advanced (MUST)</li>
<li>Data Warehouse – Advanced (MUST)</li>
<li>Cloud Data Platforms - Advanced</li>
<li>Data Integration Tools – Advanced</li>
<li>Snowflake or Databricks - Intermediate (4-6 Years) MUST</li>
<li>Any Cloud - Intermediate (4-6 Years)</li>
<li>Power BI or Tableau - Intermediate (4-6 Years)</li>
<li>Data Science tools (Sagemaker, Databricks) - Intermediate (4-6 Years)</li>
<li>Data Lakehouse – Intermediate (MUST)</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>A competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Private Health Insurance</li>
<li>Paid Time Off</li>
<li>Training &amp; Development opportunities in partnership with renowned companies</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across over 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/x7tKXYFBB815ca6oBV5T2E/remote-fbs-senior-data-domain-architect-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>3585d7e1-453</externalid>
      <Title>PLM Consultant Role:- Junior level</Title>
      <Description><![CDATA[<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment where you can work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p>Our Supply Chain Management Team supports the world’s most recognizable brands to design and implement a future-ready operating model for supply chains that balances Cost, Speed and Service. We enable business growth through building supply chain resilience and agility from visibility to orchestration to settlement, shifting focus from cost to high-value work that transforms customer experience.</p>
<p>We are a global team operating from various locations in Europe, North America, and India. We focus on consulting services around the topics of Customer Centricity, Procurement Experience, Supply Chain Planning, Smart Operations and Agile Logistics.</p>
<p>We’re looking for a PLM Consultant to design, implement, and optimize PLM solutions for clients across industries. You’ll combine technical expertise with consulting skills to deliver impactful results.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Gather and analyze client PLM requirements and processes.</li>
<li>Design and implement PLM solutions aligned with client objectives, including workflow design, data management, and integration with ERP/CRM systems.</li>
<li>Configure and implement PLM systems (Siemens Teamcenter, PTC Windchill, Dassault ENOVIA, SAP PLM).</li>
<li>Assist in creating PLM roadmaps, governance models, and best practices for clients.</li>
<li>Support data migration, integration, testing, and user training.</li>
<li>Recommend PLM best practices and process improvements.</li>
<li>Collaborate with cross-functional teams to ensure successful project delivery.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3-8 years of experience in PLM implementation</li>
<li>3-8 years of consulting experience.</li>
<li>Experience with PLM integration with ERP, MES, or CRM systems.</li>
<li>Hands-on experience with PLM tools such as Siemens Teamcenter, PTC Windchill, Dassault ENOVIA, or SAP PLM.</li>
<li>Strong understanding of product development processes, BOM management, CAD integration, and change management.</li>
<li>Experience with PLM system configuration, workflows, and data modeling.</li>
<li>Ability to manage multiple client engagements and deliver results in a fast-paced consulting environment.</li>
<li>Knowledge of industry-specific product development processes (e.g. Manufacturing, Consumer Goods and Life-Sciences/ Pharmaceuticals).</li>
<li>Certification in relevant PLM tools</li>
<li>Have experience in gathering, validating, synthesizing, documenting, and communicating data and information for a range of audiences</li>
<li>Have excellent interpersonal skills and strong written and verbal communication skills in the country’s official language(s) (C2 proficiency) and English (C2 proficiency)</li>
<li>Have project-related mobility/willingness to travel</li>
</ul>
<p>Given that this is just a short snapshot of the role, we encourage you to apply even if you don&#39;t meet all the requirements listed above. We are looking for individuals who strive to make an impact and are eager to learn. If this sounds like you and you feel you have the skills and experience required, then please apply now.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PLM implementation, consulting experience, PLM integration with ERP, MES, or CRM systems, PLM tools such as Siemens Teamcenter, PTC Windchill, Dassault ENOVIA, or SAP PLM, product development processes, BOM management, CAD integration, and change management, PLM system configuration, workflows, and data modeling, industry-specific product development processes (e.g. Manufacturing, Consumer Goods and Life-Sciences/ Pharmaceuticals), certification in relevant PLM tools</Skills>
      <Category>Engineering</Category>
      <Industry>Consulting</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a global management consulting firm that works with market leading brands across sectors. The company has a workforce of 300,000 employees.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/4Zm1QqGUTGkV6Mfa3mKFxT/remote-plm-consultant-role%3A--junior-level-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>4d59ed28-85d</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>As a Software Engineer at Ford Motor Company, you will be responsible for designing, developing, testing, and maintaining software applications and products to meet customer needs both on-prem and cloud native. You will work on a Balanced Product Team and collaborate with the Product Manager, Product Designer, and other Software Engineers to deliver analytic solutions.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Participate in and/or lead the development of requirements, features, user stories, use cases, and test cases.</li>
<li>Author process and design documents.</li>
<li>Work with the Business Customer, Product Owner, Architects, Product Designer, Software Engineers, and Security Controls Champion on solution design, development, and deployment.</li>
<li>Generate metrics, perform user access authorization, perform password maintenance, and build deployment pipelines.</li>
<li>Participate and/or lead incident, problem, change, and service request-related activities, including root cause analysis (RCA) and proactive problem management/defect prevention activities.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of experience in Software Engineering.</li>
<li>Bachelor&#39;s degree in computer science, computer engineering, or a combination of education and equivalent experience.</li>
<li>Design, develop, and deploy scalable and robust software solutions that integrate and leverage state-of-the-art Generative AI models (e.g., LLMs, Diffusion Models) into production systems and applications.</li>
<li>Build and optimize the infrastructure, pipelines, and tools necessary for efficient training, fine-tuning, evaluation, and serving of Generative AI models, ensuring high performance, reliability, and cost-effectiveness.</li>
<li>1+ year of experience with developing for and deploying to cloud platforms (e.g., GCP, Azure).</li>
<li>Implement and optimize cloud services and tools (e.g., Terraform, BigQuery, GCP).</li>
<li>2+ years of REST API or related development.</li>
<li>2+ years of experience in development using a combination of the following technologies:</li>
<li>Languages: Java, JS, TS, Python</li>
<li>Frontend frameworks: Angular, React</li>
<li>Backend frameworks: Spring Boot, Node</li>
<li>Proven experience understanding, practicing, and advocating for software engineering disciplines from Clean Code, Software Craftsmanship, and Lean, including:</li>
<li>Paired/Mobbing programming</li>
<li>Test-first/Test Driven Development (TDD)</li>
<li>Evolutionary design</li>
<li>Minimum Viable Product</li>
<li>Willingness to collaborate daily with team members.</li>
<li>A strong curiosity around how to best use technology to amaze and delight our customers.</li>
<li>Using CI/CD tools and pipelines (e.g., Tekton, Jenkins, GIT Action, Cloud Build, etc.).</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Immediate medical, dental, and prescription drug coverage</li>
<li>Flexible family care, parental leave, new parent ramp-up programs, subsidized back-up child care, and more</li>
<li>Vehicle discount program for employees and family members, and management leases</li>
<li>Tuition assistance</li>
<li>Established and active employee resource groups</li>
<li>Paid time off for individual and team community service</li>
<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>
<li>Paid time off and the option to purchase additional vacation time.</li>
</ul>
<p><strong>Salary</strong></p>
<p>This position is a salary grade 8 and ranges from $113,580 to $190,500.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$113,580-$190,500</Salaryrange>
<Skills>Generative AI models, Cloud platforms, Cloud services and tools, REST API, Java, JS, TS, Python, Angular, React, Spring Boot, Node, Clean Code, Software Craftsmanship, Lean, Paired/Mobbing programming, Test-first/Test Driven Development (TDD), Evolutionary design, Minimum Viable Product, CI/CD tools and pipelines, Full-stack experience, Machine learning, Mathematical modeling, Data analysis, CA Agile Central (Rally, JIRA), Backlogs, Iterations, User stories, Microservices, Fundamental data modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker that designs, manufactures, and markets vehicles and automotive-related products.</Employerdescription>
      <Employerwebsite>https://efds.fa.em5.oraclecloud.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/59660</Applyto>
      <Location>Dearborn, MI</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>0841fcf4-9ab</externalid>
      <Title>Data Engineer SE - II</Title>
      <Description><![CDATA[<p>We are on a mission to rid the world of bad customer service by “mobilizing” the way help is delivered. Today’s consumers want an always-available customer service experience that leaves them feeling valued and respected.</p>
<p>Helpshift helps B2B brands deliver this modern customer service experience through a mobile-first approach. We have changed how conversations take place, moving the conversation away from a slow, outdated email and desktop experience to an in-app chat experience that allows users to interact with brands in their own time.</p>
<p>Through our market-leading AI-powered chatbots and automation, we help brands deliver instant and rapid resolutions. Because agents play a key role in delivering help, our platform gives agents superpowers with automation and AI that simply works.</p>
<p><strong>About the Team</strong></p>
<p>Consumers care first and foremost about having their time valued by brands. Brands need insights into their customer service operation to serve their consumers effectively. Such insights and analytics are delivered through various data products like in-app analytics dashboards and data-sharing integrations.</p>
<p>The data platform team is responsible for designing, building, and maintaining the data infrastructure that enables such data and analytics products at scale. We build and manage data pipelines, databases, and other data structures to ensure that the data is reliable, accurate, and easily accessible.</p>
<p>We also enable internal stakeholders with business intelligence and support machine learning teams with data ops. This team manages the platform that handles 2 million events per minute and processes 1+ terabytes of data daily.</p>
<p><strong>About the Role</strong></p>
<ul>
<li>Building maintainable data pipelines, both for data ingestion and operational analytics, for data collected from 2 billion devices and 900M monthly active users</li>
<li>Building customer-facing analytics products that deliver actionable insights and data and make it easy to detect anomalies</li>
<li>Collaborating with data stakeholders to understand their data needs and being a part of the analysis process</li>
<li>Writing design specifications and test, deployment, and scaling plans for the data pipelines</li>
<li>Mentoring people in the team and organization</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience in building and running data pipelines that scale for TBs of data</li>
<li>Proficiency in a high-level object-oriented programming language (Python or Java) is a must</li>
<li>Experience with cloud data platforms like Snowflake and AWS (EMR/Athena) is a must</li>
<li>Experience in building modern data lakehouse architectures using Snowflake, table formats like Apache Iceberg/Hudi, and columnar formats like Parquet</li>
<li>Proficiency in Data modeling, SQL query profiling, and data warehousing skills is a must</li>
<li>Experience in distributed data processing engines like Apache Spark, Apache Flink, Dataflow/Apache Beam, etc</li>
<li>Knowledge of workflow orchestrators like Airflow, Dagster, etc is a plus</li>
<li>Data visualization skills are a plus (PowerBI, Metabase, Tableau, Hex, Sigma, etc)</li>
<li>Excellent verbal and written communication skills</li>
<li>Bachelor’s Degree in Computer Science (or equivalent)</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Hybrid setup</li>
<li>Worker&#39;s insurance</li>
<li>Paid Time Offs</li>
<li>Other employee benefits to be discussed by our Talent Acquisition team in India.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>Python, Java, Snowflake, AWS, EMR/Athena, Apache Iceberg/Hudi, Parquet, Apache Spark, Apache Flink, Dataflow/Apache Beam, Airflow, Data modeling, SQL query profiling, data warehousing, PowerBI, Metabase, Tableau, Hex, Sigma</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a company that provides a mobile-first customer service experience for B2B brands. It has over 900 million active monthly consumers and is used by hundreds of leading brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D451DB2325</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>8da705c0-ccb</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>Are you passionate about building infrastructure that powers billions of ad impressions daily? Join us to shape the backbone of a rapidly growing ad platform—where scale, reliability, and data-driven innovation are at the heart of everything we do.</p>
<p>As a Principal Software Engineer on the Bing Ads team, you will be responsible for designing and developing near real-time services, preparing data stores, and integrating them with other ad-serving components. Collaboration between and across teams is an essential part of this role, as you will engage with partners to meet mutual objectives.</p>
<p>This role will enable you to gain insights into the Bing ad serving platform, collaborate closely with data scientists, and develop expertise in working with individuals responsible for different components of the ad infrastructure. You will have the opportunity to grow your skills, learn from industry experts, and continuously expand your knowledge in a dynamic and innovative environment.</p>
<p>This role allows flexible working hours with partial work from home.</p>
<p>Responsibilities:</p>
<ul>
<li>Independently implement high-performance solutions across teams while maintaining a quality checklist.</li>
<li>Create and monitor telemetry data and influence analytics to better identify patterns that reveal errors and unexpected problems.</li>
<li>Lead by example and mentor others to produce extensible and maintainable code used across products.</li>
<li>Spearhead efforts to optimize, debug, refactor, and reuse code to improve performance, maintainability, effectiveness, and return on investment (ROI).</li>
<li>Oversee the design and development of products, identifying other teams and technologies that will be leveraged, how they will interact, and when your system may provide support to others.</li>
<li>Lead efforts to determine back-end dependencies associated with the product, ensuring appropriate security and performance, driving reliability in the solutions, and optimizing dependency chains for the solution.</li>
<li>Respond to incidents and complex issues by identifying and troubleshooting the issue, deploying the appropriate fixes, and implementing automations to prevent recurring issues.</li>
<li>Follow prescriptive guidance for security, privacy, and compliance standards.</li>
<li>Collaborate within and across teams by proactively and systematically sharing information.</li>
<li>Resolve conflicts across teams and engage with partners to meet mutual objectives.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR equivalent experience.</li>
<li>4+ years technical experience working with large-scale cloud or distributed data systems.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR Bachelor’s Degree in Computer Science or related technical field AND 10+ years technical engineering experience with coding in languages including, but not limited to, C#, Java, C, C++, Python or JavaScript OR equivalent experience.</li>
<li>8+ years technical experience in software development, service engineering, or systems engineering.</li>
<li>3+ years experience in data science, data modeling, or data engineering.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C#, Java, C, C++, Python, JavaScript, large-scale cloud or distributed data systems, data science, data modeling, data engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-10/</Applyto>
      <Location>Multiple Locations, United States</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a8eb2e15-0bb</externalid>
      <Title>Senior Business Systems Analyst, Finance Systems</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li><strong>ERP Core Financials Implementation:</strong> Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>
</ul>
<ul>
<li><strong>Financial Data Model (FDM) Design:</strong> Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>
</ul>
<ul>
<li><strong>Prism Analytics Development:</strong> Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>
</ul>
<ul>
<li><strong>Requirements Gathering &amp; Reporting:</strong> Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>
</ul>
<ul>
<li><strong>Workshop Participation &amp; Solution Design:</strong> Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>
</ul>
<ul>
<li><strong>Cross-Functional Collaboration:</strong> Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>
</ul>
<ul>
<li><strong>Cutover &amp; Hypercare Planning:</strong> Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>
</ul>
<ul>
<li><strong>Platform Expansion &amp; Adoption:</strong> Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>
</ul>
<ul>
<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>
</ul>
<ul>
<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>
</ul>
<ul>
<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>
</ul>
<ul>
<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>
</ul>
<ul>
<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>
</ul>
<ul>
<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, and technical partners</li>
</ul>
<ul>
<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>
</ul>
<ul>
<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>
</ul>
<ul>
<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>
</ul>
<ul>
<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>
</ul>
<ul>
<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>
</ul>
<ul>
<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>
</ul>
<ul>
<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>
</ul>
<ul>
<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>
</ul>
<ul>
<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>
</ul>
<ul>
<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Workday Financials, Financial Data Model (FDM), Chart of Accounts, Worktags, Dimensional Hierarchies, Accounting Books, Prism Analytics, Data Modeling, Source Integration, Calculated Fields, Report Development, ERP Implementation, Requirements Gathering, Configuration, Testing, Data Migration, Cutover Planning, Hypercare, Financial Accounting, General Ledger, Multi-Entity Consolidation, Intercompany Accounting, Management Reporting, Stakeholder Management, Communication, Analytical Skills, Problem-Solving Skills, Data Accuracy, Integrity, Workday Accounting Center, SQL, Python, Scripting Languages, BigQuery, Cloud Data Warehouses, SOX Compliance, Internal Controls, EPM/FP&amp;A Systems, Pigment, Anaplan, Adaptive Planning, ERP Integration, High-Growth Technology Companies, IPO Readiness, Workday HCM, HCM-Financials Integration, Data Migration Tools, ETL Processes, Reconciliation Frameworks</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company is working towards public company readiness.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4991194008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>d6450ee6-847</externalid>
      <Title>Data Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Cursor ships daily. Every release leaves signals behind: telemetry, prompts, completions, agent runs, sessions. Those signals power model improvement, evals, and experimentation. Data infrastructure is what turns them into something teams can trust.</p>
<p>A lot of systems here started simple so we could move fast. Over time, the constraints change and the “good enough” version becomes the bottleneck. This role owns the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.</p>
<p>Privacy guarantees are part of correctness. What we can retain and use depends on Privacy Mode and org configuration, and getting that wrong breaks a product promise. We choose work by business impact: what blocks product and model teams today, and what will block them next month.</p>
<p><strong>Sample projects include...</strong></p>
<ul>
<li>A core pipeline started as a pragmatic reuse of infrastructure built for something else. It works, but it cannot guarantee properties downstream consumers now need (for example, point-in-time consistency). You design and ship the replacement while keeping the existing system running.</li>
</ul>
<ul>
<li>A new product surface ships without instrumentation. You talk to the team, define what needs to be captured, and wire it through before the absence becomes anyone else’s problem.</li>
</ul>
<ul>
<li>Eval coverage drops. You trace it to an instrumentation gap introduced weeks ago by a product change nobody flagged. You fix the gap, add a contract so it cannot recur, and ship the dashboard that would have caught it earlier.</li>
</ul>
<ul>
<li>Multiple consumers depend on overlapping data. You design schema evolution and validation so changes in one place do not silently degrade the others.</li>
</ul>
<ul>
<li>Storage costs rise faster than usage. You decide what is worth keeping, implement retention and compression, and delete what is not.</li>
</ul>
<p><strong>What we&#39;re looking for</strong></p>
<p>We’re looking for someone who has built real systems at scale and cares about correctness, cost, and ergonomics.</p>
<p>Strong signals include:</p>
<ul>
<li>Deep experience with Spark (Databricks or open-source Spark both count)</li>
</ul>
<ul>
<li>Production experience with Ray Data</li>
</ul>
<ul>
<li>Hands-on ownership of large data pipelines and storage systems</li>
</ul>
<ul>
<li>Comfort debugging performance issues across client instrumentation, streaming, storage, and model-facing workflows, as well as the compute, storage, and networking layers</li>
</ul>
<ul>
<li>Clear thinking about data modeling and long-term maintainability</li>
</ul>
<ul>
<li>Good judgment about when to patch and when to rebuild</li>
</ul>
<p>Nice to have</p>
<ul>
<li>Experience running or scaling ClickHouse</li>
</ul>
<ul>
<li>Familiarity with dbt, Dagster, or similar orchestration and modeling tools</li>
</ul>
<p>We&#39;re in-person with cozy offices in North Beach, San Francisco and Manhattan, New York, replete with well-stocked libraries.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Spark, Ray Data, data pipelines, storage systems, debugging performance issues, data modeling, long-term maintainability, ClickHouse, dbt, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a technology company that ships daily releases, leaving behind signals that power model improvement, evals, and experimentation. The company has multiple offices in North Beach, San Francisco and Manhattan, New York.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-data-infrastructure</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>08e0bb96-8f3</externalid>
      <Title>Strategy &amp; Operations, Support</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p><strong>Compensation</strong></p>
<ul>
<li>$240K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The User Operations team (Support) is central to ensuring that our customers&#39; experience with our products is nothing short of exceptional. We resolve complex issues, provide technical guidance, and support customers in maximizing value and adoption from deploying our products. We work closely with Sales, Technical Success, Product, Engineering and others to deliver the best possible experience to our customers at scale. OpenAI&#39;s customers represent a range of diverse backgrounds and maturity, from early-stage startups to established global enterprises.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a dynamic support strategy operator to drive strategic and operational initiatives across OpenAI’s customer support/user operations landscape. In this role, you will work closely with leaders in User Operations and across the company to help scale, mature, and optimize our support operations. Your work will span a range of strategic initiatives aimed at enhancing the customer experience and driving operational excellence, ensuring that our support organization can sustainably scale with the business&#39;s growth.</p>
<p>You’ll be responsible for deeply understanding our organization and priorities – where we’re at, where we’re going – and will work relentlessly towards planning and executing on our vision to provide best-in-class support. This role is not “creating and executing playbooks”. AI has fundamentally changed, and will continue to change, the customer experience and the way we build tools and organizations; this role requires a proactive, strategic planner and executor who can think ten steps ahead, defining the future of customer support at OpenAI and in the world.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Work within User Operations and across OpenAI to leverage AI/LLMs throughout the organization. While this role goes beyond simply applying technology, AI will be at its root (as it is at the root of our company)</li>
</ul>
<ul>
<li>Collaborate with leaders to identify, evaluate, and prioritize new strategic and operational initiatives, ensuring alignment with company goals and unique organizational objectives</li>
</ul>
<ul>
<li>Thrive in chaos and relentlessly drive program structure conducive to progress and execution.</li>
</ul>
<ul>
<li>Work with product, engineering, and data teams to uncover and address key operational challenges and growth/scaling opportunities within the support organization, such as automation, process optimization, and enhanced self-service options.</li>
</ul>
<ul>
<li>Dive deep into the critical drivers of our support operations and identify opportunities for optimization and innovation.</li>
</ul>
<ul>
<li>Partner with other members of the go-to-market organization, product, and partnerships to launch new initiatives – helping think through strategic impacts, executing on operational components, and driving change management.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 8+ years of experience in business operations, strategy, venture capital, private equity, or consulting, with a history of high-impact work in a technical environment.</li>
</ul>
<ul>
<li>Are non-traditional – this role will require leveraging technology and concepts that are not yet established in the world (some of which you will produce yourself). “Traditional” operations may not fully apply here.</li>
</ul>
<ul>
<li>Are comfortable operating at all altitudes – discussing strategy and vision with executives, and troubleshooting operations with individual contributors.</li>
</ul>
<ul>
<li>Have extensive experience in taking end-to-end ownership of large, ambiguous problems, and breaking them down into clear, actionable plans.</li>
</ul>
<ul>
<li>Have direct experience engaging with executives and senior leaders to influence and drive strategic decisions.</li>
</ul>
<ul>
<li>Are highly analytical, with strong skills in data modeling and operational forecasting to drive insights and decision-making.</li>
</ul>
<ul>
<li>Possess excellent communication and collaboration skills and are skilled in influencing stakeholders across all levels of the organization.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$240K</Salaryrange>
      <Skills>Business operations, Strategy, Venture capital, Private equity, Consulting, AI/LLMs, Data modeling, Operational forecasting, Communication, Collaboration, Influencing stakeholders, Technical environment, Program structure, Progress and execution, Automation, Process optimization, Enhanced self-service options, Change management</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence and natural language processing. It was founded in 2015 and has since become one of the leading companies in the field of AI.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/9747412d-a23d-48ba-bddd-ad1247c360f5</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>8c8874a5-184</externalid>
      <Title>Engineering Manager, Order Systems</Title>
      <Description><![CDATA[<p><strong>Engineering Manager, Order Systems</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>Within Applied Engineering, the Financial Engineering team ensures that our products are monetized effectively to accommodate customers&#39; varying needs and scales. Collaborating closely with the GTM and Finance teams, we strive to tailor our billing stack to our evolving internal requirements.</p>
<p><strong>About the role</strong></p>
<p>We are looking for an Engineering Manager to build and operate the workflows to power quoting, tracking, and fulfillment for everything sold by OpenAI. We also build critical billing and invoicing capabilities (and partner closely on customer-facing billing experiences) to ensure financial integrity, auditability, and a seamless, transparent onboarding and billing journey for enterprise customers.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Lead and grow a team of engineers responsible for order management automation, focusing on reliability, correctness, and smooth customer onboarding.</li>
</ul>
<ul>
<li>Own the architecture and roadmap for order data flows into downstream systems (e.g., internal provisioning services, billing/invoicing services, and revenue workflows).</li>
</ul>
<ul>
<li>Build and operate resilient workflows and services that automate entitlements, provisioning, usage controls, SKU attribution, invoice generation/delivery, and revenue recognition—minimizing manual steps while maximizing correctness and traceability.</li>
</ul>
<ul>
<li>Improve accuracy and timeliness of provisioning, billing, and invoicing—reducing manual intervention and operational load through automation, validation, and reconciliations.</li>
</ul>
<ul>
<li>Establish strong operational practices (observability, alerting, runbooks, on-call) so systems remain healthy without constant human oversight.</li>
</ul>
<ul>
<li>Partner deeply with Sales Operations, Finance, Accounting, Support, Product, Security, and Compliance to translate requirements into resilient, auditable workflows.</li>
</ul>
<ul>
<li>Drive clarity across ambiguous problem spaces and evolving product offerings; create frameworks and abstractions that scale as OpenAI’s commercial footprint expands.</li>
</ul>
<ul>
<li>Set high engineering standards via technical direction, design reviews, mentoring, and fostering a culture of ownership and continuous improvement.</li>
</ul>
<ul>
<li>Bring strong leadership: teach and level up engineers, recruit and retain talent, manage stakeholders, and scope work to balance customer needs with realistic deliverables.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 7+ years of professional software engineering experience, including 3+ years leading engineers or managing teams.</li>
</ul>
<ul>
<li>Have experience integrating revenue platforms (e.g., CPQ, billing, invoicing, revenue recognition, entitlement systems).</li>
</ul>
<ul>
<li>Have built or operated complex commerce / SaaS ordering systems.</li>
</ul>
<ul>
<li>Are strong in systems design, data modeling, and building reliable distributed services and workflows.</li>
</ul>
<ul>
<li>Have experience in domains where correctness, auditability, and reconciliation are essential (e.g., payments, ERP/finance systems, invoicing/revenue recognition).</li>
</ul>
<ul>
<li>Can lead (or effectively partner on) customer-facing billing experiences with a focus on clarity, accessibility, and trust.</li>
</ul>
<ul>
<li>Care deeply about operational excellence and building systems that are observable, predictable, and resilient.</li>
</ul>
<ul>
<li>Communicate clearly, build trust, and lead with context rather than control; you’re energized by close cross-functional partnership.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with large-scale SaaS billing or enterprise account provisioning; usage-based and/or seat-based pricing; invoice generation at scale.</li>
</ul>
<ul>
<li>Familiarity with finance and revenue operations concepts (e.g., proration, credits/adjustments, revenue schedules).</li>
</ul>
<ul>
<li>Exposure to common CRM/ERP and payment ecosystems (e.g., CPQ, invoicing, collections, payment processors), without reliance on any single vendor.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose technologies like artificial intelligence benefit all of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $385K • Offers Equity</Salaryrange>
      <Skills>software engineering, revenue platforms, CPQ, billing, invoicing, revenue recognition, entitlement systems, complex commerce, SaaS ordering systems, systems design, data modeling, distributed services, workflows, payments, ERP/finance systems, large-scale SaaS billing, enterprise account provisioning, usage-based pricing, seat-based pricing, invoice generation at scale, finance and revenue operations, proration, credits/adjustments, revenue schedules, CRM/ERP and payment ecosystems, collections, payment processors</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose technologies like artificial intelligence benefit all of humanity.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a9a1ada5-118b-4b12-80ce-59846c4bd2bf</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>448a56f3-ab5</externalid>
      <Title>Director of Data Engineering and Agentic AI Automation, Finance</Title>
      <Description><![CDATA[<p><strong>Director of Data Engineering and Agentic AI Automation, Finance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Finance</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>We are looking for a Director of Data Engineering and Agentic AI Automation to lead the next generation of our finance data infrastructure. As OpenAI expands its Finance operations, we need scalable and trustworthy data systems to match the pace and complexity of our growth. This includes well-modeled, auditable data for revenue recognition, financial reporting, and planning, supported by reliable pipelines that connect ERP, planning, and operational systems. You will lead a group of analytics engineers, data engineers, and AI engineers to build the data pipelines that connect our internal engineering systems with enterprise platforms such as Oracle Fusion ERP. This role will also define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making across Finance.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and maintain scalable, auditable data infrastructure that powers accurate financial information, with a focus on revenue recognition, compute attribution, and close automation.</li>
</ul>
<ul>
<li>Lead and grow teams of analytics engineers, data engineers, and AI engineers to deliver high-impact, intelligent data systems.</li>
</ul>
<ul>
<li>Guide work across financial close and allocations automation, B2C revenue automation from engineering systems to ERP (including reconciliation with cash and source systems), and other mission-critical financial processes.</li>
</ul>
<ul>
<li>Design and implement data pipelines connecting ERP, planning, and operational systems, including Oracle Fusion, Anaplan, and Workday.</li>
</ul>
<ul>
<li>Build and support scalable, audit-proof architecture that enables reliable financial reporting and compliance.</li>
</ul>
<ul>
<li>Develop data and AI-powered workflows that enhance forecasting accuracy, compliance automation, and operational efficiency.</li>
</ul>
<ul>
<li>Create and maintain data marts and products that support stakeholders across Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, and Controller teams.</li>
</ul>
<ul>
<li>Define and enforce best practices for data modeling, lineage, observability, and reconciliation across finance data domains.</li>
</ul>
<ul>
<li>Set the technical direction and manage team structure, mentoring engineers and overseeing contractors or system integrators to ensure delivery of high-quality outcomes.</li>
</ul>
<ul>
<li>Partner with senior leaders across Finance, Engineering, and Infrastructure to align on priorities and integrate new automation capabilities.</li>
</ul>
<ul>
<li>Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.</li>
</ul>
<ul>
<li>Own and maintain Tier-1 data pipelines with strict SLA, data quality, and compliance standards.</li>
</ul>
<ul>
<li>Drive the long-term roadmap for agentic AI enablement to build the foundation for “Finance on OpenAI.”</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>12+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets</li>
</ul>
<ul>
<li>Proficiency in SQL and Python, with demonstrated experience in schema design, data modeling, and orchestration frameworks</li>
</ul>
<ul>
<li>Expertise in distributed data processing technologies such as Apache Spark, Kafka, and cloud-native storage (e.g., S3, ADLS)</li>
</ul>
<ul>
<li>Deep knowledge of enterprise data architecture, especially within Finance and Supply Chain</li>
</ul>
<ul>
<li>Familiarity with financial processes (close, allocations, revenue recognition) and supply chain data models (supply and demand planning, procurement, vendor master), along with experience ingesting data from internal engineering systems with large volumes of B2C data</li>
</ul>
<ul>
<li>Experience integrating with contract manufacturers and external logistics providers is a strong plus</li>
</ul>
<ul>
<li>Strong track record of partnering with senior business stakeholders</li>
</ul>
<p><strong>Work Environment</strong></p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$347K – $490K • Offers Equity</Salaryrange>
      <Skills>SQL, Python, Apache Spark, Kafka, cloud-native storage, data modeling, orchestration frameworks, distributed data processing technologies, enterprise data architecture, financial processes, supply chain data models, ETL pipelines, complex datasets, schema design, data engineering, data infrastructure, auditable data, revenue recognition, financial reporting, planning, ERP, operational systems, Oracle Fusion, Anaplan, Workday, data marts, products, stakeholders, Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, Controller, lineage, observability, reconciliation, finance data domains, team structure, engineers, contractors, system integrators, predictive analytics, autonomous agent workflows, large-scale automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e84e7b7e-a82e-411e-929a-615dc3080280</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>deca2d46-8fe</externalid>
      <Title>Software Engineer, Full Stack, Revenue Platform</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Full Stack, Revenue Platform</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Revenue Platform sits at the intersection of customer experience, financial precision, and enterprise-grade reliability. We build both end-user experiences and the underlying platform capabilities that power invoicing, billing, payments, and revenue recognition across OpenAI. Our work spans high-leverage customer surfaces and deep, reusable platform primitives used by multiple teams. These are foundational systems that will support OpenAI’s growth for years to come, and we’re looking for engineers who care deeply about craftsmanship, correctness, and building platforms and experiences that scale gracefully and are a joy to build on.</p>
<p><strong>About the Role</strong></p>
<p>As a Full Stack Engineer on the Revenue Platform team, you will design, build, and operate platform services and user-facing interfaces that form the backbone of OpenAI’s commercial engine. You’ll collaborate with product, design, finance, and engineering partners to deliver intuitive customer experiences while also creating shared foundations that other teams can safely and efficiently build on. Your work will directly shape the reliability, scalability, and trustworthiness of OpenAI’s most critical financial workflows.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve shared full-stack platform components including APIs, data models, services, and UI primitives that power billing, subscriptions, usage-based pricing, and enterprise entitlements across OpenAI.</li>
</ul>
<ul>
<li>Design scalable, reusable revenue workflows and abstractions that other product teams can compose to launch new offerings without reinventing core billing logic.</li>
</ul>
<ul>
<li>Partner closely with product, frontend, and backend engineers to deliver end-to-end revenue capabilities, ensuring platform components are intuitive to adopt and safe to extend.</li>
</ul>
<ul>
<li>Develop internal platforms and tools used by Finance, Accounting, Sales, Support, and Go-To-Market teams to manage, audit, and reason about revenue data efficiently.</li>
</ul>
<ul>
<li>Build automation and AI-powered capabilities within the Revenue Platform to reduce manual work, surface insights, and improve operational decision-making.</li>
</ul>
<ul>
<li>Help define the architecture, standards, and contracts for a shared revenue platform, balancing flexibility for product teams with correctness, reliability, and compliance.</li>
</ul>
<ul>
<li>Collaborate cross-functionally to translate ambiguous commercial, financial, and operational requirements into durable platform primitives that scale with OpenAI’s products and customer base.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of experience building full-stack web applications with strong fundamentals across frontend, backend, and API design.</li>
</ul>
<ul>
<li>Proficiency with modern frontend frameworks (e.g., React, TypeScript) and backend technologies (Python preferred; Node, Go, or similar also welcome).</li>
</ul>
<ul>
<li>Experience designing and implementing scalable, reusable platform components and revenue workflows.</li>
</ul>
<ul>
<li>Strong understanding of financial and commercial concepts, including billing, subscriptions, and revenue recognition.</li>
</ul>
<ul>
<li>Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.</li>
</ul>
<ul>
<li>Strong problem-solving skills, with the ability to analyze complex technical and business problems and develop effective solutions.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with cloud-based platforms (e.g., AWS, GCP) and containerization (e.g., Docker).</li>
</ul>
<ul>
<li>Familiarity with DevOps practices and tools (e.g., CI/CD pipelines, monitoring, logging).</li>
</ul>
<ul>
<li>Experience with data modeling and database design.</li>
</ul>
<ul>
<li>Knowledge of machine learning and AI concepts, including natural language processing and computer vision.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
</ul>
<ul>
<li>Opportunity to work with a talented and diverse team of engineers and product managers.</li>
</ul>
<ul>
<li>Collaborative and dynamic work environment.</li>
</ul>
<ul>
<li>Professional development opportunities, including training and mentorship.</li>
</ul>
<ul>
<li>Flexible work arrangements, including remote work options.</li>
</ul>
<ul>
<li>Access to cutting-edge technology and tools.</li>
</ul>
<ul>
<li>Recognition and rewards for outstanding performance.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you are a motivated and talented engineer who is passionate about building scalable and reliable platform components, we encourage you to apply for this role. Please submit your resume and a cover letter that outlines your experience and qualifications for the position. We look forward to hearing from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>full-stack web applications, frontend frameworks, backend technologies, API design, scalable platform components, revenue workflows, financial and commercial concepts, billing, subscriptions, revenue recognition, cloud-based platforms, containerization, DevOps practices, data modeling, database design, machine learning, AI concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that aims to ensure that artificial general intelligence benefits all of humanity. It was founded in 2015 and has since grown to become a leading player in the field of artificial intelligence.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/8427b270-8440-400c-bc18-ff24c4f0f987</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>96c1d44e-459</externalid>
      <Title>Database Administrator</Title>
      <Description><![CDATA[<p>As a database administrator on the Steam team, you will join a world-class group of software, networking, and operations engineers who facilitate the creation and distribution of social entertainment experiences to players around the globe. We&#39;re looking for hands-on SQL Server database administrators to help design and build the future of Steam, our digital distribution platform.</p>
<p>We take care of all aspects of database planning, implementation, and maintenance. We perform physical database implementation, including index tuning, storage management and partitioning, and scoping and integration of new datacenter hardware. We monitor, tune, and refactor database and server performance. We collaborate to implement database-related features, including data modeling, procedural code design, query tuning, and performance management. We design and implement schemes for data security, system availability, and disaster recovery, including backup, log shipping, mirroring and Always On.</p>
<p>We also recruit database specialists like you. Do you prefer to collaborate with peers to define the work that you pursue? Valve&#39;s work environment is unique in that we rely on employees to be self-motivated, accountable, and able to recognize where to focus energy. If you&#39;re seeking an opportunity to improve upon processes and systems while continuing to call upon your database administration skills, consider joining Valve.</p>
<p>Database administrators at Valve have significant industry experience related to database design, implementation, and maintenance. Typical skills include:</p>
<ul>
<li>Expertise with relational database management systems such as Microsoft SQL Server.</li>
<li>Database administration experience in large-scale high-availability environments.</li>
<li>Task automation and configuration management with languages such as PowerShell, Python, etc.</li>
<li>Design and management of disk storage systems.</li>
<li>Capacity planning and system integration.</li>
<li>Networking systems.</li>
</ul>
<p>What We Offer</p>
<ul>
<li>An organization where 100% of time is dedicated as groups see fit</li>
<li>The opportunity to collaborate with experts across a range of disciplines</li>
<li>A work environment and flexible schedule in support of families and domestic partnerships</li>
<li>A culture eager to become stronger through diversity of all forms</li>
<li>Exceptional health insurance coverage</li>
<li>Unrivaled employer match for our 401(k) retirement plan</li>
<li>Generous vacation and family leave</li>
<li>On-site amenities in support of health and efficiency</li>
<li>Fertility and adoption assistance</li>
<li>Reimbursement for child care during interviews</li>
</ul>
<p>Valve strives to improve the diversity of our teams to better serve our diverse global audience. We welcome and encourage individuals from all backgrounds to apply. Candidates will be considered without regard to race, religion, color, national origin, gender, sexual orientation, age, family status, veteran status or disability status. Valve is committed to creating an inclusive work environment and does not tolerate discrimination or harassment in the workplace.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Microsoft SQL Server, database administration, task automation, disk storage systems, capacity planning, networking systems, PowerShell, Python, data modeling, procedural code design, query tuning, performance management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Valve Software</Employername>
      <Employerlogo>https://logos.yubhub.co/valvesoftware.com.png</Employerlogo>
      <Employerdescription>Valve Software is an entertainment and technology company that designs and delivers extraordinary entertainment experiences to customers. It is a world-class group of software, networking, and operations engineers.</Employerdescription>
      <Employerwebsite>https://www.valvesoftware.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.valvesoftware.com/en/jobs?job_id=16</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>052c1387-c77</externalid>
      <Title>Statistician</Title>
      <Description><![CDATA[<p>Though we don’t have a specific position open at this time, and we’re not actively seeking people with any specific qualifications, we’re always looking to add talented individuals to our team.</p>
<p>One thing we have at Valve is data. Lots and lots of data. We also have statisticians, and we’re always looking to add to our team. Statisticians at Valve use their outstanding empirical research skills to turn that data into insights that guide product decisions and improve our customers&#39; experiences. Some projects that Valve statisticians have worked on in the past include:</p>
<ul>
<li>Designing, developing, and validating statistical models to explain past behavior and to predict future behavior in Valve&#39;s products</li>
<li>Improving Valve&#39;s existing metrics collection and analysis techniques to expand the range of questions we&#39;re able to explore</li>
<li>Uncovering latent trends in customer behavior to improve existing products and inspire new features</li>
<li>Providing quantitative rationale to inform group decision-making processes</li>
</ul>
<p>The statisticians that we currently have at Valve have a graduate degree in Statistics, Applied Mathematics, or a related field; substantial experience with statistics and data modeling in an applied context; knowledge of statistical techniques to build predictive models; proficiency writing and shipping SQL code in a large-scale relational database environment; and proficiency in programming languages such as C++, SQL, and PHP.</p>
<p>Intrigued? Send us your resume.</p>
<p><strong>Apply now!</strong></p>
<p>What We Offer</p>
<ul>
<li>An organization where 100% of time is dedicated as groups see fit</li>
<li>The opportunity to collaborate with experts across a range of disciplines</li>
<li>A work environment and flexible schedule in support of families and domestic partnerships</li>
<li>A culture eager to become stronger through diversity of all forms</li>
<li>Exceptional health insurance coverage</li>
<li>Unrivaled employer match for our 401(k) retirement plan</li>
<li>Generous vacation and family leave</li>
<li>On-site amenities in support of health and efficiency</li>
<li>Fertility and adoption assistance</li>
<li>Reimbursement for child care during interviews</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>statistics, data modeling, SQL, C++, PHP, machine learning, data analysis, data visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Valve</Employername>
      <Employerlogo>https://logos.yubhub.co/valvesoftware.com.png</Employerlogo>
      <Employerdescription>Valve is a software company that develops and publishes video games. It is a privately held company with a large team of developers and designers.</Employerdescription>
      <Employerwebsite>https://www.valvesoftware.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.valvesoftware.com/en/jobs?job_id=19</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>2f652ea5-0df</externalid>
      <Title>Member of Technical Staff - Data Infra - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Infra - MAI Superintelligence Team at their Mountain View office. This role sits at the heart of the MAI Superintelligence effort, building the data infrastructure that turns web-scale, multimodal data into the training datasets behind the world&#39;s most capable frontier models. You&#39;ll work closely with teams across Microsoft AI to push the boundaries of scale, performance, and product deployment.</p>
<p><strong>About the Role</strong></p>
<p>We are on a mission to create the largest and most advanced multimodal dataset in the world. This dataset, spanning all modalities from across the web and beyond, will power the training of the world’s most capable AI frontier models, pushing the boundaries of scale, performance, and product deployment. The AI Data Infra team at Microsoft AI is responsible for building data infrastructure to help MAI teams to generate the biggest and best training dataset. Our work involves data pipelines, Spark, Ray, Vector Databases, and all other aspects of data infra. We are looking for outstanding individuals excited about contributing to the next generation of systems that will transform the field.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and develop data pipelines that ingest enormous amounts of multi-modal training data (text, audio, images, video).</li>
<li>Own and maintain critical data infrastructures, including Spark, Ray, vector databases, and others.</li>
<li>Build and maintain cutting-edge infrastructure that can store and process the petabytes of data needed to power models.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ year(s) experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Data engineering, data modeling, data science, software development, and data infrastructure.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passionate about the role of data in large-scale AI model training.</li>
<li>Thrive in a highly collaborative, fast-paced environment.</li>
<li>Bring a high degree of expertise and close attention to detail.</li>
<li>Demonstrate a proactive attitude and enthusiasm for exploring new methods and technologies.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $139,900 - $274,800 per year.</li>
<li>Comprehensive benefits package, including medical, dental, and vision insurance.</li>
<li>401(k) matching program.</li>
<li>Paid time off and holidays.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>data engineering, data modeling, data science, software development, data infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They are known for their innovative products and services that empower individuals and organizations to achieve more. Microsoft AI is committed to making a positive impact on society through their technology.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infra-mai-superintelligence-team-3/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a0589f8d-d17</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft Advertising are looking for a talented Senior Data Scientist at their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the online marketplace.</p>
<p><strong>About the Role</strong></p>
<p>We are inviting you to join the Microsoft Advertising Data Science Team. Our team manages the marketplace, which includes monitoring business metrics, defining metrics, building analytical &amp; experimentation frameworks, and enabling leadership to make data-driven decisions. The team works across Engineering, Product, and Business to address complex data science problems across users, advertisers, and publishers. We are looking for a Senior Data Scientist who is willing to work in a dynamic environment to solve real-life, day-to-day problems, leveraging data science techniques. You will enjoy and be successful in this role if you are curious, willing to challenge the status quo, and able to come up with data-driven solutions to ambiguous problems.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Conduct in-depth market research across online advertising sectors, identifying emerging trends, competitive threats, and partnership opportunities that directly inform the company&#39;s quarterly strategic planning sessions</li>
<li>Develop and maintain complex data models and algorithms to drive business insights and decision-making</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results)</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>5+ years of experience in at least one programming language such as Python, R, MATLAB, C#, Java, or C++</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong organizational, analytical, and data science skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Professional development opportunities</li>
<li>Flexible work arrangements</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>data science, machine learning, statistics, programming languages, data modeling, Python, R, MATLAB, C#, Java, C++</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft Advertising</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft Advertising platform is the single stop shop for all monetization needs for Publishers and Advertisers globally. Our team manages the marketplace, which includes monitoring business metrics, defining metrics, building analytical &amp; experimentation frameworks, and enabling leadership to make data driven decisions.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-data-scientist-microsoft-advertising-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a14e2c8b-37a</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineer at their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Engineer, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years experience in business analytics, data science, software development, data modeling or data engineering work.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading provider of artificial intelligence solutions, working to build systems that have true artificial intelligence across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e7e8ccb3-342</externalid>
      <Title>Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot at their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years experience in business analytics, data science, software development, data modeling or data engineering work.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering M5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading company in the field of artificial intelligence, working to build systems that have true artificial intelligence across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineering-manager-microsoft-ai-copilot/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>90fb1727-c33</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Senior Software Engineer at their Mountain View office. This role sits at the heart of driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions.</p>
<p><strong>About the Role</strong></p>
<p>As a Senior Software Engineer, you will play a key role in designing and implementing scalable data solutions. You will collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. You will develop and optimize data models to support data analytics, utilize advanced analytics techniques to extract insights from large datasets, and drive data-driven decision making.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>
<li>Develop and optimize data models to support data analytics.</li>
<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Solid experience with data processing frameworks such as Apache Spark and Hadoop.</li>
<li>Expertise in SQL and experience with RDBMS and key-value stores.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>
<li>Solid communication skills.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</li>
<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>data engineering, data analytics, software development, data modeling, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft empowers the world&apos;s largest advertisers to reach their maximum potential through digital advertising solutions on the Microsoft Advertising platform. They are a leading technology company with a strong presence in the global market.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-68/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>5e7ed194-bd7</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineer at their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Engineer, you will be responsible for building scalable data pipelines for sourcing, transforming, and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming, and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years experience in business analytics, data science, software development, data modeling or data engineering work.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They aim to make AI accessible to all, consumers, businesses, developers, so that everyone can realize its benefits. Microsoft AI is constantly pushing the boundaries of AI, building systems that have true artificial intelligence across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>71dd03c1-3da</externalid>
      <Title>Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot at their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years experience in business analytics, data science, software development, data modeling or data engineering work.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering M5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year.</li>
<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 - $274,800 per year</Salaryrange>
      <Skills>data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is pushing the boundaries of artificial intelligence. They aim to make AI accessible to all, so that everyone can realize its benefits.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineering-manager-microsoft-ai-copilot-2/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>0b85acc3-a49</externalid>
      <Title>Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Engineering Manager - Microsoft AI - Copilot, you will be responsible for building scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. You will work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, and application engineers, as well as AI Researchers, to build next-generation data platform products and services.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years experience in business analytics, data science, software development, data modeling or data engineering work.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering M5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year.</li>
<li>There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 - $274,800 per year</Salaryrange>
      <Skills>data engineering, data science, software development, data modeling, Apache Hadoop, Kafka, NoSQL, Python, Java, Spark, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that aims to empower every person and every organization on the planet to achieve more. They are known for their innovative products and services, including the Microsoft AI platform, which enables businesses to build and deploy artificial intelligence solutions.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineering-manager-microsoft-ai-copilot-3/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>